diff --git a/README.md b/README.md
index 447f2cb..c11cf1b 100644
--- a/README.md
+++ b/README.md
@@ -1,59 +1,40 @@
 ![UI/UX Audit Skill](https://ik.imagekit.io/dtunrco/Ox_TPIFD9IbLFwUwSkYoe4BJ6CGJWo-ba19G1ZGXNwI.png)
-# UX Audit Skill — Installation Guide
+# UX Audit Skill
 
-## Prerequisites
+Run structured UX/UI audits on web projects with Claude Code. Scans for hardcoded CSS, accessibility issues, and design token gaps. Outputs to reveal.js slide decks, Figma canvas, or scrollable HTML.
 
-- **Claude Code CLI** (`claude`) — [claude.ai/code](https://claude.ai/code)
-- **Figma MCP server** — configured in your Claude MCP settings. Used for `mcp__figma__*` tools throughout the audit.
-- **Optics MCP server** (optional, recommended) — `mcp__optics__*` tools enable automatic token mapping and contrast checking. Without it the skill falls back to grep-based analysis.
-- **Node.js 18+** — required for token generation scripts
-
----
-
-## Install from rolemodel-design-skills
-
-### 1. Clone the repo (if you haven't already)
+## Quick Start
 
 ```bash
-git clone git@github.com:RoleModel/rolemodel-design-skills.git /path/to/rolemodel-design-skills
+git clone git@github.com:RoleModel/rolemodel-design-audit.git
+cd rolemodel-design-audit
+./scripts/setup.sh
 ```
 
-### 2. Symlink the skill into Claude Code
-
-```bash
-ln -s /path/to/rolemodel-design-skills/skills/ux-audit ~/.claude/skills/ux-audit
-```
+The setup script checks everything, installs what it can, and symlinks the skill into Claude Code. When it finishes, open Claude Code in any project and type `/ux-audit`.
 
-Verify it's available — restart Claude Code and check that `/ux-audit` appears in skill completions.
+## What the Setup Script Does
 
-### 3. Configure MCP servers
+1. **Checks core dependencies** — Claude Code CLI, Node.js 18+
+2. **Checks optional tools** — ripgrep (for CSS scanning), GitHub CLI (for project search)
+3. **Symlinks the skill** into `~/.claude/skills/ux-audit`
+4. 
**Checks Figma MCP** — detects the Figma plugin or manual MCP config
+5. **Sets script permissions** — makes all shell scripts executable
 
-In your `~/.claude/mcp.json` (or equivalent), ensure you have the Figma and Optics servers configured:
-
-```json
-{
-  "mcpServers": {
-    "figma": {
-      "command": "npx",
-      "args": ["-y", "@figma/mcp-server"],
-      "env": { "FIGMA_ACCESS_TOKEN": "YOUR_TOKEN" }
-    },
-    "optics": {
-      "command": "npx",
-      "args": ["-y", "@rolemodel/optics-mcp"]
-    }
-  }
-}
-```
-
----
+If anything is missing, the script tells you exactly what to install and how.
 
 ## Per-project Setup
 
-### 1. Create `.ux-audit.json` in the project root
+Run `/ux-audit` in a project directory. Claude will ask a few questions and create `.ux-audit.json` for you:
+
+- **Audience** — `"client"` (polished, narrative) or `"internal"` (direct, technical)
+- **Brand colors** — primary hue/saturation from the project
+- **Design system** — defaults to `@rolemodel/optics`, works with any system
+- **Figma file key** — for canvas output (optional)
+- **Case study URL** — for cover images and narrative context (optional)
 
-Run `/ux-audit` and Claude will prompt you for answers and create the file, or create it manually:
+Or create `.ux-audit.json` manually:
 
 ```json
 {
@@ -82,48 +63,46 @@ Run `/ux-audit` and Claude will prompt you for answers and create the file, or c
 }
 ```
 
-**Key fields:**
-- `audience` — `"client"` or `"internal"`. This changes everything: tone, report structure, finding language.
-- `brand.caseStudyUrl` — if set, Claude fetches the case study to ground the "Then" section in real context and pull the hero image.
-- `figma.fileKey` — leave `null` to create a new file, or paste an existing key from a Figma URL.
-
-### 2. 
Run the audit
+## Usage
 
 ```bash
-# Full audit (all 5 phases)
+# Full audit — scans, maps tokens, audits, walks findings interactively
 /ux-audit
 
-# Individual phases
-/ux-audit scan
-/ux-audit tokens
-/ux-audit accessibility
+# Generate client report (after design work is done)
 /ux-audit report
-/ux-audit figma
-```
 
-### 3. Push to Figma
+# Individual phases
+/ux-audit scan           # Tech stack + codebase scan
+/ux-audit tokens         # Design token mapping
+/ux-audit audit          # Heuristic audit (10 sections)
+/ux-audit review         # Interactive finding review
+/ux-audit figma          # Figma deliverables + token JSON
+/ux-audit publish        # Deploy to Vercel/Netlify/Surge
+```
 
-After Phase 4 generates the HTML report:
+### Shell Scripts (Automation)
 
 ```bash
-/ux-audit figma
-```
+# Interactive — prompts for project, audience, format
+./skills/ux-audit/scripts/run-audit-agent.sh
 
-This uses `mcp__figma__generate_figma_design` to push the report. Claude will start a local HTTP server, open the page in your browser, and poll for completion.
+# Headless — for CI or automation
+./skills/ux-audit/scripts/run-audit-agent.sh ~/Development/my-app --client --reveal
 
----
+# Publish report to a URL
+./skills/ux-audit/scripts/publish-report.sh
+```
 
 ## Non-Optics Projects
 
-The skill works on any web project. When `designSystem.name` is not `"optics"`, token mapping falls back to grep-based analysis against whatever CSS/token files it finds in `node_modules` or the project. Set `designSystem.name` to match the system in use (e.g. `"tailwind"`, `"bootstrap"`, `"custom"`).
-
----
+The skill works on any web project. Set `designSystem.name` to match your system (`"tailwind"`, `"bootstrap"`, `"custom"`). Token mapping falls back to grep-based analysis when Optics MCP tools aren't available.
 
-## Updating the Skill
+## Updating
 
 ```bash
-cd /path/to/rolemodel-design-skills
+cd rolemodel-design-audit
 git pull
 ```
 
-The symlink means your Claude Code installation picks up changes immediately — no reinstall needed. 
+The symlink means Claude Code picks up changes immediately. diff --git a/scripts/setup.sh b/scripts/setup.sh new file mode 100755 index 0000000..bec21c0 --- /dev/null +++ b/scripts/setup.sh @@ -0,0 +1,221 @@ +#!/usr/bin/env bash +# +# setup.sh — One-command setup for the UX Audit skill. +# +# Checks all prerequisites, installs what it can, symlinks the skill +# into Claude Code, and verifies MCP server configuration. +# +# Usage: +# ./scripts/setup.sh +# +# Run from the repo root (rolemodel-design-audit/). + +set -euo pipefail + +# --- Colors --- +BLUE='\033[0;34m' +GREEN='\033[0;32m' +YELLOW='\033[0;33m' +RED='\033[0;31m' +CYAN='\033[0;36m' +BOLD='\033[1m' +DIM='\033[2m' +RESET='\033[0m' + +pass() { printf "${GREEN} ✓ %s${RESET}\n" "$1"; } +warn() { printf "${YELLOW} ⚠ %s${RESET}\n" "$1"; } +fail() { printf "${RED} ✗ %s${RESET}\n" "$1"; } +info() { printf "${DIM} %s${RESET}\n" "$1"; } +header(){ printf "\n${CYAN}${BOLD} %s${RESET}\n" "$1"; } + +ERRORS=0 +WARNINGS=0 + +# --- Resolve repo root --- +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)" +SKILL_DIR="$REPO_ROOT/skills/ux-audit" + +if [ ! -f "$SKILL_DIR/SKILL.md" ]; then + fail "Cannot find skills/ux-audit/SKILL.md — run this script from the repo root." + exit 1 +fi + +# --- Header --- +echo "" +printf "${CYAN}${BOLD} ╔══════════════════════════════════════════╗${RESET}\n" +printf "${CYAN}${BOLD} ║ UX Audit Skill Setup ║${RESET}\n" +printf "${CYAN}${BOLD} ╚══════════════════════════════════════════╝${RESET}\n" + +# ───────────────────────────────────────────────────────────────────── +header "1. 
Core Dependencies" +# ───────────────────────────────────────────────────────────────────── + +# Claude Code CLI +if command -v claude &>/dev/null; then + CLAUDE_VERSION=$(claude --version 2>/dev/null || echo "unknown") + pass "Claude Code CLI ($CLAUDE_VERSION)" +else + fail "Claude Code CLI not found" + info "Install: https://claude.ai/code" + ERRORS=$((ERRORS + 1)) +fi + +# Node.js +if command -v node &>/dev/null; then + NODE_VERSION=$(node --version 2>/dev/null) + NODE_MAJOR=$(echo "$NODE_VERSION" | sed 's/v//' | cut -d. -f1) + if [ "$NODE_MAJOR" -ge 18 ] 2>/dev/null; then + pass "Node.js $NODE_VERSION" + else + warn "Node.js $NODE_VERSION (18+ recommended)" + info "Token generation scripts may not work with older versions." + WARNINGS=$((WARNINGS + 1)) + fi +else + fail "Node.js not found" + info "Install: brew install node (or https://nodejs.org)" + ERRORS=$((ERRORS + 1)) +fi + +# ───────────────────────────────────────────────────────────────────── +header "2. Optional Tools" +# ───────────────────────────────────────────────────────────────────── + +# ripgrep (used by scan-hardcoded-values.sh) +if command -v rg &>/dev/null; then + pass "ripgrep (rg) — used by scan-hardcoded-values.sh" +else + warn "ripgrep not found — scan-hardcoded-values.sh won't work" + info "Install: brew install ripgrep" + WARNINGS=$((WARNINGS + 1)) +fi + +# GitHub CLI (used by run-audit-agent.sh for project search) +if command -v gh &>/dev/null; then + if gh auth status &>/dev/null 2>&1; then + pass "GitHub CLI (gh) — authenticated" + else + warn "GitHub CLI (gh) installed but not authenticated" + info "Run: gh auth login" + WARNINGS=$((WARNINGS + 1)) + fi +else + warn "GitHub CLI not found — project search from GitHub won't work" + info "Install: brew install gh" + WARNINGS=$((WARNINGS + 1)) +fi + +# ───────────────────────────────────────────────────────────────────── +header "3. 
Skill Installation" +# ───────────────────────────────────────────────────────────────────── + +CLAUDE_SKILLS_DIR="$HOME/.claude/skills" +SKILL_LINK="$CLAUDE_SKILLS_DIR/ux-audit" + +if [ -L "$SKILL_LINK" ]; then + LINK_TARGET=$(readlink "$SKILL_LINK" 2>/dev/null || readlink -f "$SKILL_LINK" 2>/dev/null) + if [ "$LINK_TARGET" = "$SKILL_DIR" ]; then + pass "Skill symlinked → $SKILL_LINK" + else + warn "Skill symlink exists but points to: $LINK_TARGET" + info "Expected: $SKILL_DIR" + printf "${BLUE}${BOLD} Update symlink? ${RESET}${GREEN}[yes]${RESET}${BLUE}: ${RESET}" + read -r CONFIRM + CONFIRM="${CONFIRM:-yes}" + if [ "$CONFIRM" = "yes" ] || [ "$CONFIRM" = "y" ]; then + rm "$SKILL_LINK" + ln -s "$SKILL_DIR" "$SKILL_LINK" + pass "Symlink updated" + fi + fi +elif [ -d "$SKILL_LINK" ]; then + warn "Skill directory exists (not a symlink): $SKILL_LINK" + info "Remove it and re-run setup to use the repo version." + WARNINGS=$((WARNINGS + 1)) +else + mkdir -p "$CLAUDE_SKILLS_DIR" + ln -s "$SKILL_DIR" "$SKILL_LINK" + pass "Skill symlinked → $SKILL_LINK" +fi + +# ───────────────────────────────────────────────────────────────────── +header "4. Figma MCP Server" +# ───────────────────────────────────────────────────────────────────── + +# Check if Figma plugin is enabled in Claude Code +CLAUDE_SETTINGS="$HOME/.claude/settings.json" +FIGMA_CONFIGURED=false + +if [ -f "$CLAUDE_SETTINGS" ]; then + # Check for Figma plugin in enabledPlugins + if grep -q 'figma.*claude-plugins' "$CLAUDE_SETTINGS" 2>/dev/null; then + pass "Figma plugin enabled in Claude Code" + FIGMA_CONFIGURED=true + fi +fi + +# Also check mcp.json for manual Figma server config +CLAUDE_MCP="$HOME/.claude/mcp.json" +if [ -f "$CLAUDE_MCP" ]; then + if grep -q '"figma"' "$CLAUDE_MCP" 2>/dev/null; then + pass "Figma MCP server found in mcp.json" + FIGMA_CONFIGURED=true + fi +fi + +if ! 
$FIGMA_CONFIGURED; then + warn "Figma MCP not detected" + info "The Figma plugin provides screenshot, canvas write, and design inspection tools." + info "Without it, the skill still works but skips Figma-dependent features." + echo "" + info "To install the official Figma plugin:" + printf "${CYAN} claude plugin add figma${RESET}\n" + echo "" + info "Or add manually to ~/.claude/mcp.json:" + printf "${DIM} {\"mcpServers\": {\"figma\": {\"command\": \"npx\", \"args\": [\"-y\", \"@figma/mcp-server\"], \"env\": {\"FIGMA_ACCESS_TOKEN\": \"YOUR_TOKEN\"}}}}${RESET}\n" + WARNINGS=$((WARNINGS + 1)) +fi + +# ───────────────────────────────────────────────────────────────────── +header "5. Script Permissions" +# ───────────────────────────────────────────────────────────────────── + +SCRIPTS=( + "$SKILL_DIR/scripts/scan-hardcoded-values.sh" + "$SKILL_DIR/scripts/publish-report.sh" + "$SKILL_DIR/scripts/run-audit-agent.sh" + "$REPO_ROOT/scripts/setup.sh" +) + +MADE_EXECUTABLE=0 +for script in "${SCRIPTS[@]}"; do + if [ -f "$script" ] && [ ! -x "$script" ]; then + chmod +x "$script" + MADE_EXECUTABLE=$((MADE_EXECUTABLE + 1)) + fi +done + +if [ $MADE_EXECUTABLE -gt 0 ]; then + pass "Made $MADE_EXECUTABLE scripts executable" +else + pass "All scripts already executable" +fi + +# ───────────────────────────────────────────────────────────────────── +header "Summary" +# ───────────────────────────────────────────────────────────────────── + +echo "" +if [ $ERRORS -eq 0 ] && [ $WARNINGS -eq 0 ]; then + printf "${GREEN}${BOLD} All clear! The skill is ready to use.${RESET}\n" +elif [ $ERRORS -eq 0 ]; then + printf "${YELLOW}${BOLD} Ready with %d warning(s). The skill will work — some optional features may be limited.${RESET}\n" "$WARNINGS" +else + printf "${RED}${BOLD} %d error(s) and %d warning(s). 
Fix the errors above before using the skill.${RESET}\n" "$ERRORS" "$WARNINGS" +fi + +echo "" +printf "${DIM} To run an audit, open Claude Code in a project directory and type:${RESET}\n" +printf "${CYAN} /ux-audit${RESET}\n" +echo "" diff --git a/skills/ux-audit/AGENT.md b/skills/ux-audit/AGENT.md index 7113b25..ffcadba 100644 --- a/skills/ux-audit/AGENT.md +++ b/skills/ux-audit/AGENT.md @@ -11,28 +11,33 @@ The agent runs in a project directory and executes the full `/ux-audit` skill wo 1. Detects tech stack and scans for hardcoded CSS values 2. Maps values to the target design system (Optics MCP if available) 3. Runs static accessibility analysis -4. Generates an HTML audit report from the appropriate template -5. Optionally pushes deliverables to Figma and generates DTCG token JSON +4. Generates the audit report (reveal.js slide deck, Figma canvas, or scrollable HTML) +5. Optionally publishes to Vercel/Netlify/Surge and generates DTCG token JSON -At the end it reports the output file path and any Figma URLs. +At the end it reports the output file path, published URL, and any Figma URLs. --- ## Shell Agent -The simplest form. Uses the Claude Code CLI in non-interactive mode. +The simplest form. Uses the Claude Code CLI in non-interactive mode. Supports both **interactive** (prompted) and **headless** (all args) modes. ```bash -# scripts/run-audit-agent.sh — already included in this skill -./scripts/run-audit-agent.sh [project-dir] [--client|--internal] [phase] +# Interactive — prompts for project dir, audience, format, phase +./scripts/run-audit-agent.sh -# Examples -./scripts/run-audit-agent.sh ~/Development/my-app --client +# Headless — all arguments provided (for CI/automation) +./scripts/run-audit-agent.sh ~/Development/my-app --client --reveal ./scripts/run-audit-agent.sh ~/Development/my-app --internal scan -./scripts/run-audit-agent.sh . --client report +./scripts/run-audit-agent.sh . 
--client --figma + +# Publishing — interactive prompts for provider and project name +./scripts/publish-report.sh ``` -See [scripts/run-audit-agent.sh](scripts/run-audit-agent.sh) for the implementation. +When run without arguments, both scripts prompt with colored output and sensible defaults — just press Enter to accept defaults. + +See [scripts/run-audit-agent.sh](scripts/run-audit-agent.sh) and [scripts/publish-report.sh](scripts/publish-report.sh) for implementations. --- @@ -45,14 +50,31 @@ Configure the agent using standard environment variables and the tool list below ``` You are running a UX audit on a web project. Follow the /ux-audit skill exactly. Complete all phases without asking for confirmation. Write findings based only on -what you observe in the code — never fabricate. When done, report the output file -path and any Figma URLs. +what you observe in the code — never fabricate. NEVER create or inject CSS — copy +the template HTML and its companion CSS file verbatim into the output directory. +The only style you may add is a single :root { --accent } override. + +Output format (from .ux-audit.json "format" field): +- "reveal" → Use report-template-reveal.html. Single self-contained file, no companion + CSS. Same placeholder names as client template. Supports PDF via ?print-pdf. +- "figma" → Write directly to Figma canvas via use_figma. MUST load figma-use skill + first. Duplicate the template at figma.templateKey, populate section by section. + See references/figma-workflow.md for the complete workflow. +- "html" → Scrollable HTML + companion CSS. Copy both files unchanged. 
+ +Image sourcing: +- Cover image + client logo: web scrape from brand.portfolioUrl (look for + data-framer-background-image-wrapper elements on Framer portfolio pages) +- Section screenshots: use mcp__figma__get_screenshot when figma.fileKey is configured +- Embed as base64 data URIs for self-contained deployment, or external URLs + +When done, report the output file path, published URL (if any), and Figma URLs. ``` ### User message ``` -/ux-audit --client +/ux-audit --client --reveal ``` Or for a specific phase: @@ -61,6 +83,18 @@ Or for a specific phase: /ux-audit scan --internal ``` +For Figma canvas output: + +``` +/ux-audit --client --figma +``` + +To publish after generating: + +``` +/ux-audit publish +``` + ### Required tools The audit is primarily read-only. Provide these tools to the agent: @@ -72,8 +106,8 @@ Read, Glob, Grep # Report output (write to outputDir only) Write -# Token generation + HTTP server for Figma export -Bash(node:*), Bash(python3:*), Bash(mkdir:*), Bash(lsof:*), Bash(open:*) +# Token generation + HTTP server + publishing +Bash(node:*), Bash(npx:*), Bash(python3:*), Bash(mkdir:*), Bash(lsof:*), Bash(open:*), Bash(curl:*) # Design system token lookup + contrast checking mcp__optics__search_tokens @@ -84,15 +118,22 @@ mcp__optics__get_component_tokens mcp__optics__get_token mcp__optics__validate_token_usage -# Figma — design inspection + report export +# Figma — design inspection + canvas write + report export mcp__figma__get_design_context mcp__figma__get_screenshot mcp__figma__get_metadata -mcp__figma__generate_figma_design +mcp__figma__use_figma # Direct canvas write (MUST load figma-use skill first) +mcp__figma__search_design_system # Discover components, variables, styles +mcp__figma__create_new_file # Duplicate template for new audits +mcp__figma__generate_figma_design # HTML-to-Figma fallback mcp__figma__get_variable_defs -# Optional — for fetching case study URLs +# Web scraping — cover images, client logos, case study context 
WebFetch, WebSearch + +# Skills — must be loaded before specific tool calls +Skill(figma-use) # MANDATORY before every use_figma call +Skill(figma-generate-design) # For section-by-section Figma assembly ``` ### Settings @@ -108,8 +149,11 @@ cwd: # Set to the project root before invoking The only writes the agent makes: - `.ux-audit.json` — config, created on first run if missing -- `{outputDir}/ux-audit-report.html` — the report +- `{outputDir}/ux-audit-report.html` — the report (HTML/reveal format) +- `{outputDir}/report-template*.css` — companion CSS file copied verbatim (HTML format only, never modified) - `{outputDir}/light.tokens.json` + `dark.tokens.json` — DTCG token files (Phase 5 only) +- Figma canvas writes via `use_figma` (Figma format only — writes to a duplicated template file) +- Published URL output (optional, when `/ux-audit publish` is invoked) For fully autonomous runs, scope `Write` to `{outputDir}/*` and `.ux-audit.json` only. diff --git a/skills/ux-audit/SKILL.md b/skills/ux-audit/SKILL.md index c5349cf..9fd3287 100644 --- a/skills/ux-audit/SKILL.md +++ b/skills/ux-audit/SKILL.md @@ -1,20 +1,110 @@ --- name: ux-audit -description: Run a UX/UI audit on a web project. Scans for hardcoded CSS values, accessibility violations, design token inconsistencies, and component patterns. Generates an HTML report, Figma deliverables, and DTCG token JSON. Defaults to @rolemodel/optics but configurable for any design system. +description: Run a UX/UI audit on a web project. Scans for hardcoded CSS values, accessibility violations, design token inconsistencies, and component patterns. Outputs to Figma (canvas API), reveal.js slide deck, or scrollable HTML. Publish to Vercel/Netlify/Surge. Defaults to @rolemodel/optics but configurable for any design system. --- # UX/UI Audit Skill -Run structured UX/UI audits on web projects. 
Scan the codebase for design token inconsistencies, accessibility violations, typography issues, color system problems, and component pattern anti-patterns. Produce an HTML audit report, optional Figma deliverables, and DTCG token JSON for Figma Variables import. +Run structured UX/UI audits on web projects in **two sessions**: + +1. **Discovery & Review** — Scan the codebase, identify findings, and walk through them interactively with the team. This is an internal conversation — collaborative, iterative, and thorough. +2. **Client Report** — After design work is complete (Figma mockups, prototypes), come back to generate the polished client-facing deliverable with real screenshots, video walkthroughs, and interactive embeds. + +Output to **reveal.js** (self-contained slide deck), **Figma** (direct canvas write), or **scrollable HTML**. Publish to Vercel, Netlify, or Surge. Generate DTCG token JSON for Figma Variables import. + +## Workflow Overview + +``` +┌─────────────────────────────────────────────────┐ +│ SESSION 1: Discovery & Review │ +│ │ +│ Phase 1: Codebase scan + tech stack detection │ +│ Phase 2: Design token mapping │ +│ Phase 3: Heuristic audit (10 sections) │ +│ Phase 4: Interactive review ← YOU ARE HERE │ +│ Walk each finding with the team. │ +│ Confirm, reject, reprioritize. │ +│ Output: reviewed-findings.json │ +│ │ +│ ⏸ PAUSE — Design work happens here │ +│ Figma mockups, prototypes, redesigns │ +│ │ +│ SESSION 2: Client Report │ +│ │ +│ Phase 5: Report generation (reveal/figma/html) │ +│ Pull screenshots from Figma designs │ +│ Add video walkthrough + interactive │ +│ embed if available │ +│ Phase 6: Figma deliverables + token JSON │ +│ Phase 7: Publish to web │ +└─────────────────────────────────────────────────┘ +``` + +## Prerequisites + +Run the setup script from the repo root — it checks everything and tells you what's missing: + +```bash +./scripts/setup.sh +``` + +**What's needed:** + +| Dependency | Required? 
| Purpose | +|-----------|-----------|---------| +| Claude Code CLI | Yes | Runtime for the skill | +| Node.js 18+ | Yes | Token generation scripts (.mjs) | +| Figma plugin | Recommended | Screenshots, canvas writes, design inspection | +| ripgrep (`rg`) | Optional | Powers `scan-hardcoded-values.sh` | +| GitHub CLI (`gh`) | Optional | Project search from GitHub in `run-audit-agent.sh` | + +**Figma skills used by this audit:** + +| Skill | When used | Purpose | +|-------|-----------|---------| +| `figma-use` | **Every** `use_figma` call (mandatory) | Loads Plugin API context; skipping it causes hard-to-debug failures | +| `figma-generate-design` | Phase 5–6 (Figma format) | Section-by-section canvas assembly using design system tokens | +| `figma-create-new-file` | Phase 6 (template duplication) | Creates the project-specific audit file from the template | + +Without the Figma plugin, the skill still works — it skips Figma-dependent features and uses reveal.js or HTML output instead. ## Invocation -- `/ux-audit` — Full audit (all 5 phases in sequence) -- `/ux-audit scan` — Phase 1 only: tech stack detection + codebase scan -- `/ux-audit tokens` — Phase 2 only: design token mapping -- `/ux-audit accessibility` — Phase 3 only: accessibility audit -- `/ux-audit report` — Phase 4 only: generate HTML report -- `/ux-audit figma` — Phase 5 only: Figma deliverables + token JSON +Most users only need two commands — one for each session: + +| Command | What it does | +|---------|-------------| +| `/ux-audit` | **Session 1**: Scans the codebase, maps tokens, runs the heuristic audit, then walks you through each finding interactively. Ends with a reviewed findings file and instructions for the design phase. | +| `/ux-audit report` | **Session 2**: After design work is done, generates the client-facing report with screenshots, video, and embeds. Offers to publish when complete. | + +That's it. 
Phases run automatically within each session — the user never needs to think about phase numbers. + +### Advanced: Individual Phase Commands + +For power users, CI, or re-running a specific step: + +```bash +/ux-audit scan # Phase 1 only: tech stack + codebase scan +/ux-audit tokens # Phase 2 only: design token mapping +/ux-audit audit # Phase 3 only: heuristic audit (10 sections) +/ux-audit review # Phase 4 only: interactive finding review +/ux-audit report # Phase 5 only: generate the client report +/ux-audit figma # Phase 6 only: Figma deliverables + token JSON +/ux-audit publish # Phase 7 only: deploy to Vercel/Netlify/Surge +``` + +### Shell Scripts + +```bash +# Interactive — prompts for everything +./scripts/run-audit-agent.sh + +# Non-interactive (CI/automation) +./scripts/run-audit-agent.sh ~/Development/my-app --client --reveal + +# Publish with guided setup (checks auth, installs CLI if needed) +./scripts/publish-report.sh +``` ### Audience Mode @@ -25,8 +115,35 @@ Every audit runs in one of two modes. If not specified, ask the user. The mode can also be set in `.ux-audit.json` via `"audience": "internal"` or `"audience": "client"`. +### Output Format + +Three output formats are available. Set via CLI flag or `"format"` in `.ux-audit.json`: + +| Format | Flag | Description | Best For | +|--------|------|-------------|----------| +| **reveal** | `--reveal` | Self-contained reveal.js slide deck (HTML + inlined CSS + CDN JS). Arrow keys, swipe, or click to navigate. | Web sharing, presentations, PDF export | +| **figma** | `--figma` | Direct write to Figma canvas via Plugin API. Uses the [Figma template](references/figma-workflow.md) as the visual structure. | Editable design deliverables, client collaboration | +| **html** | `--html` | Scrollable single-page HTML with companion CSS. 
| Internal reviews, print | + +- `reveal` and `figma` are only valid with `--client` mode +- Default format is `"reveal"` for client mode, `"html"` for internal mode +- `reveal` format supports PDF export via `?print-pdf` query parameter (see [reveal.js PDF docs](https://revealjs.com/pdf-export/)) +- `figma` format requires the Figma Desktop MCP server (`b20fbcc1`) connected +- All formats support web publishing via `./scripts/publish-report.sh` (Vercel, Netlify, or Surge) + See [references/tone-guide.md](references/tone-guide.md) for detailed language rules for each mode. See [references/team-guide.md](references/team-guide.md) for the complete client audit philosophy and deliverable structure. +See [references/figma-workflow.md](references/figma-workflow.md) for the Figma canvas workflow. + +### Required References + +Read these files before beginning any audit phase: + +- **[references/laws-of-ux.md](references/laws-of-ux.md)** — The 21 Laws of UX with review checklists, code examples, and review flags. Use as the authoritative source for all UX principle evaluations in Phase 3 and the UX Principles Assessment in Phase 4. +- [references/tone-guide.md](references/tone-guide.md) — Language rules for internal vs client mode +- [references/team-guide.md](references/team-guide.md) — Client audit philosophy and deliverable structure +- [references/severity-model.md](references/severity-model.md) — Finding classification (Critical/High/Medium/Pattern) +- [references/audit-checklist.md](references/audit-checklist.md) — Scan patterns for hardcoded values ### Additional Arguments @@ -38,18 +155,23 @@ Arguments after the phase name are passed as context. For example: Look for `.ux-audit.json` in the project root. If it does not exist, ask the user these questions and create it: -1. **Audience** — `"internal"` (developer team) or `"client"` (external stakeholder). This fundamentally changes the tone, labels, and framing of the entire report. -2. 
**Target design system** — default: `@rolemodel/optics`. Accept any CSS framework name. -3. **Brand primary color** — accept hex (#F7BD04), HSL (hsl(46, 97%, 49%)), or "use default" -4. **Brand font family** — default from project's existing CSS -5. **Figma file key** — or "create new" or "skip" -6. **Output directory** — default: `dev-tools/ux-audit-output` +1. **Audience** — `"internal"` (developer team) or `"client"` (external stakeholder). Default: `"client"`. +2. **Output format** — `"reveal"` (slide deck), `"figma"` (canvas write), or `"html"` (scrollable). Default: `"reveal"` for client, `"html"` for internal. +3. **Target design system** — default: `@rolemodel/optics`. Accept any CSS framework name. +4. **Brand primary color** — accept hex (#F7BD04), HSL (hsl(46, 97%, 49%)), or "use default" +5. **Brand font family** — default from project's existing CSS +6. **Portfolio URL** — URL to the company portfolio page (for cover image + client logo scraping). Example: `https://rolemodelsoftware.com/portfolio` +7. **Case study URL** — URL to the project's case study page (for narrative context). Optional. +8. **Figma file key** — target file, or "create new", or "skip" +9. **Publish provider** — `"vercel"` (default), `"netlify"`, or `"surge"` +10. **Output directory** — default: `dev-tools/ux-audit-output` Config schema: ```json { - "audience": "internal", + "audience": "client", + "format": "reveal", "designSystem": { "name": "optics", "package": "@rolemodel/optics", @@ -64,12 +186,18 @@ Config schema: "neutralHue": 226, "neutralSaturation": 5, "fontFamily": "DM Sans", - "caseStudyUrl": null + "caseStudyUrl": null, + "portfolioUrl": null }, "figma": { "fileKey": null, + "templateKey": "iyfRvWyTHSbNYtpBcjuvGg", "outputMode": "newFile" }, + "publish": { + "provider": "vercel", + "projectName": null + }, "outputDir": "dev-tools/ux-audit-output", "appUrl": "http://localhost:3000" } @@ -81,12 +209,92 @@ The `audience` field accepts `"internal"` or `"client"`. 
This controls: - Finding language and tone (direct vs diplomatic, "what we observed" + "what this means for users") - Executive summary framing (stats-driven vs narrative paragraph + callout) +The `format` field accepts `"reveal"` (default for client), `"figma"`, or `"html"`: +- **`"reveal"`** — Self-contained HTML slide deck. CSS inlined, reveal.js from CDN. Shareable via URL, PDF-exportable via `?print-pdf`. Only valid with `audience: "client"`. +- **`"figma"`** — Direct write to Figma canvas via the Plugin API. Uses the template at `figma.templateKey` as the visual structure. The agent duplicates the template, then populates it section-by-section. Only valid with `audience: "client"`. Requires Figma Desktop MCP. +- **`"html"`** — Scrollable single-page HTML with companion CSS file. Works for both audience modes. + +The `brand.portfolioUrl` field (e.g., `"https://rolemodelsoftware.com/portfolio"`) is used to **web-scrape the cover image and client logo**. The scraper looks for `data-framer-background-image-wrapper` elements on Framer-built portfolio pages to find the project card with the hero background and logo overlay. + +The `figma` section controls Figma output: +- `fileKey` — Target Figma file for output (or `null` for new file) +- `templateKey` — Figma template file key (default: `"iyfRvWyTHSbNYtpBcjuvGg"`) +- `outputMode` — `"newFile"` (duplicate template) or `"existingFile"` (write to fileKey) + +The `publish` section controls static deployment: +- `provider` — `"vercel"` (default), `"netlify"`, or `"surge"` +- `projectName` — override the deployed project name (default: `{brand.name}-assessment` slugified). This becomes the Vercel subdomain, e.g. `rapidair-assessment.vercel.app` — keep it clean and client-facing, no internal tool names. + When `designSystem.name` is `"optics"`, use the Optics MCP tools (`mcp__optics__*`) for token lookups, component mapping, and contrast checking. For any other design system, fall back to Grep/Read-based analysis. 
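
The grep-based fallback can be sketched in a few portable commands. This is an illustrative sketch only — the demo stylesheet and the two patterns below are assumptions, not the skill's actual scan list (see `references/audit-checklist.md` for the real patterns):

```shell
# Hypothetical grep-based fallback scan for a non-Optics project.
# The demo stylesheet below is illustrative — real runs scan the project's own CSS.
demo_dir="$(mktemp -d)"
cat > "$demo_dir/app.css" <<'EOF'
.button { color: #f7bd04; font-size: 13px; }
.card   { background: var(--color-surface); }
EOF

# Hex color literals that bypass design tokens
grep -nE '#[0-9a-fA-F]{3,8}' "$demo_dir/app.css"

# Hardcoded pixel font sizes
grep -nE 'font-size:[[:space:]]*[0-9]+px' "$demo_dir/app.css"

# Rough token-adoption signal: how many declarations already use var(--...)
echo "token refs: $(grep -c 'var(--' "$demo_dir/app.css")"
```

The same shape extends to rem/em sizes, `rgb()`/`hsl()` literals, and z-index values; ripgrep (`rg`) runs the same regexes faster when installed.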
+### Figma MCP Usage Limits
+
+Figma MCP tool calls are **rate-limited by plan** — not billed per request, but hard-capped (daily on paid plans, monthly on Starter/View). Exceeding the limit locks you out until it resets. Plan accordingly.
+
+| Plan + Seat Type | Limit | Per-Minute |
+|-----------------|-------|------------|
+| Enterprise (Full/Dev) | 600 calls/day | unlimited |
+| Pro/Organization (Full/Dev) | 200 calls/day | 15–20/min |
+| Starter or View/Collab | **6 calls/month** | — |
+
+**Estimated usage per audit phase:**
+
+| Operation | Estimated Calls | Notes |
+|-----------|----------------|-------|
+| Read design context (`get_design_context`) | 3–8 | Depends on number of screens inspected |
+| Get screenshots (`get_screenshot`) | 3–10 | One per finding with a mockup |
+| Get metadata (`get_metadata`) | 1–3 | Structure overview |
+| Search design system (`search_design_system`) | 2–5 | Component/variable discovery |
+| Write to canvas (`use_figma`) | 10–30 | Section-by-section population |
+| **Full audit (read + write)** | **~20–55 calls** | ~25% of Pro daily limit |
+| **Read-only audit (no Figma output)** | **~8–15 calls** | Screenshots + context only |
+
+**Exempt from rate limits:** `generate_figma_design` (HTML capture), `add_code_connect_map`, `whoami`.
+
+**Before running Figma phases**, the agent should:
+1. Estimate the number of calls needed based on finding count
+2. Warn the user: *"This audit will use approximately N of your 200 daily Figma MCP calls. Proceed?"*
+3. 
If the user is on a Starter/View plan (6/month), warn strongly and suggest using the reveal.js HTML format instead + +**To minimize usage:** +- Use `format: "reveal"` (HTML) for the report — zero Figma write calls +- Pull screenshots in batch (`get_screenshot` for multiple nodes in sequence) +- Only use `format: "figma"` when the client specifically needs an editable Figma deliverable + ## Phase 1: Tech Stack Detection + Codebase Scan **Goal**: Identify the project's tech stack and scan for all hardcoded values. +### Pre-flight: Locate the Project + +If the current working directory doesn't look like a project (no `Gemfile`, `package.json`, or source files), ask the user: + +*"I don't see a project here. What's the project name?"* + +Then attempt to find it: + +1. **Search locally** — check common paths: + ```bash + ls -d ~/Development/{name} ~/projects/{name} ~/code/{name} 2>/dev/null + ``` + +2. **Search GitHub** — if not found locally, search the org: + ```bash + gh repo list RoleModel --limit 100 --json name,url | jq '.[] | select(.name | test("name"; "i"))' + ``` + If found, offer to clone it: + ``` + Found "RoleModel/{name}" on GitHub. Clone it to ~/Development/{name}? (yes/no) + ``` + Clone with: + ```bash + gh repo clone RoleModel/{name} ~/Development/{name} + ``` + +3. **If nothing found** — ask the user for the full path or repo URL. + +Once the project directory is confirmed, `cd` into it and proceed. + ### Steps 1. **Detect tech stack** by checking for: @@ -114,16 +322,37 @@ When `designSystem.name` is `"optics"`, use the Optics MCP tools (`mcp__optics__ - SCSS variables: `\$[a-z]` declarations - Count which files USE tokens vs hardcode values -5. **Fetch case study context + hero image** (if `brand.caseStudyUrl` is set in config): +5. **Fetch cover image, client logo, and case study context**: + + Image sourcing uses **two URLs** from config — `brand.portfolioUrl` for the card images and `brand.caseStudyUrl` for narrative context. 
+
+   **Cover image + client logo** (from `brand.portfolioUrl`):
+
+   The portfolio page (e.g., `https://rolemodelsoftware.com/portfolio`) contains project cards — each with a background image and overlaid client logo. These are inside `data-framer-background-image-wrapper` elements on Framer-built sites.
+
+   ```bash
+   # Scrape portfolio page for project card images
+   # The page may lazy-load cards — use "Load More" button or fetch the full DOM
+   curl -sL "{portfolioUrl}" | grep -oP 'data-framer-background-image-wrapper[^<]*<img[^>]+src="[^"]+"' | grep -oP 'src="\K[^"]+'
+   ```
+
+   To find the **correct project card**:
+   1. Search the page for the project name (case-insensitive) — it may be in an `alt` attribute, nearby text, or link href
+   2. The card's `data-framer-background-image-wrapper` `<div>` gives you the **hero background image**
+   3. Look for a second image inside the same card container — this is typically the **client logo** (often an SVG or white-on-dark logo)
+   4. If the portfolio uses pagination ("Load More"), the project may not be in the initial HTML — note this for the user
+
+   Store as:
+   - `HERO_IMAGE_URL` → `{{HERO_IMAGE_URL}}` template placeholder (cover slide background)
+   - `CLIENT_LOGO_URL` → `{{CLIENT_LOGO_URL}}` template placeholder (cover slide logo)
-   **Hero image** — used in the report header:
+   If nothing found or fetch fails, keep the cover on the dark token fallback (`--dark`).
+
+   **Hero image from case study page** (fallback if portfolio scrape fails):
    ```bash
-   curl -s "{caseStudyUrl}" | grep -oE 'data-framer-background-image-wrapper[^>]*>.*?https://framerusercontent\.com/images/[A-Za-z0-9]+\.(webp|jpg|png)' | grep -oE 'https://framerusercontent\.com/images/[A-Za-z0-9]+\.(webp|jpg|png)' | head -5
+   curl -sL "{caseStudyUrl}" | grep -oP 'data-framer-background-image-wrapper[^<]*<img[^>]+src="\K[^"]+' | head -5
    ```
-   - **Prefer `data-framer-background-image-wrapper` elements** — on Framer-built sites, full-bleed hero/background images are in `<div data-framer-background-image-wrapper>
` containers. There may be multiple; pick the one that visually represents the project (usually product photography or a hero scene, not the company logo or abstract blur). - - If `data-framer-background-image-wrapper` yields nothing, fall back to the first `` with a framerusercontent src that is NOT inside a `pointer-events:none` or blur container - - Store as `HERO_IMAGE_URL` for the `{{HERO_IMAGE_URL}}` template placeholder - - If nothing found or fetch fails, leave the placeholder empty — CSS `background-image: url()` degrades to a plain dark header + Pick the image that visually represents the project (product photography or hero scene, not the company logo or abstract blur). **Case study narrative context** — used to write the report: Use `WebFetch` on the `caseStudyUrl` with this prompt: *"Extract: (1) the problem or business need the software solved, (2) key features or capabilities built, (3) any outcomes, metrics, or impact statements, (4) quotes or notable language used to describe the product. Return as structured bullet points."* @@ -200,6 +429,8 @@ Store all findings in memory for subsequent phases. **Goal**: Walk the product through the 10 UX sections from [references/day-1-audit-checklist.html](references/day-1-audit-checklist.html) and produce classified findings for each. Combine observable heuristic evaluation (via `appUrl` or provided screenshots) with static code analysis patterns. +Apply the 21 Laws of UX from [references/laws-of-ux.md](references/laws-of-ux.md) throughout this phase. Use the Review Checklist to identify which laws are relevant to each section, and cite specific laws when documenting findings. + Run all 10 sections. For each section, produce a list of findings classified by severity using [references/severity-model.md](references/severity-model.md). 
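The per-section totals can be tallied mechanically once findings are collected. A sketch assuming a hypothetical tab-separated working file (severity, then title); the format is illustrative, not part of the skill's spec:

```bash
# Hypothetical intermediate format: severity<TAB>title, one finding per line.
printf '%s\t%s\n' \
  high     'Active nav state is missing' \
  medium   'Toolbar icons below 44x44px touch target' \
  high     'Settings panel unreachable by keyboard' \
  critical 'Destructive "Start Over" lacks a real confirm step' \
  > /tmp/findings.tsv

# Tally findings per severity for the section summary line
awk -F'\t' '{ n[$1]++ } END { for (s in n) printf "%s: %d\n", s, n[s] }' /tmp/findings.tsv | sort
# → critical: 1
#   high: 2
#   medium: 1
```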
### Section 1: First Impressions & Visual Coherence @@ -350,15 +581,153 @@ Total: N findings (C critical, H high, M medium, P patterns) Classify each finding using [references/severity-model.md](references/severity-model.md). -## Phase 4: Report Generation +## Phase 4: Interactive Review + +**Goal**: Walk through every finding with the team member running the audit. Confirm, reject, refine, and prioritize before any design work begins. This is the most important phase — it ensures the report is grounded in shared understanding, not just automated analysis. + +**This phase is interactive.** Do not skip it. Do not batch-approve findings. Present each one and wait for input. + +### How It Works + +For each finding from Phase 3, present it to the user and ask: + +1. **Present the finding** clearly: + ``` + ── Finding 3 of 17 ───────────────────────────── + Section: Navigation & Wayfinding + Severity: High + + "Active nav state is missing — users can't tell where they are." + + Evidence: No .active or aria-current class on nav links. + Files: app/views/layouts/_nav.html.slim:12-28 + ───────────────────────────────────────────────── + ``` + +2. **Ask for confirmation**: + - "Do you agree with this finding? (yes / no / modify)" + - If **no** → mark as rejected, ask why (store the reason), move on + - If **modify** → ask what should change (wording, severity, scope), update the finding + - If **yes** → proceed to prioritization + +3. **Ask for prioritization** (for confirmed findings): + - "Should this be in the client report? (yes / maybe / internal-only)" + - `yes` → included in the client deliverable + - `maybe` → flagged for discussion, not in v1 of the report + - `internal-only` → stays in internal notes, not shown to client + +4. **Ask for design direction** (for client-facing findings): + - "Any thoughts on the redesign direction? Or should we propose something?" 
+ - Capture notes like "use a floating panel instead" or "they've mentioned wanting tabs" + - These notes inform the design work that happens between sessions + +5. **Ask for grouping preference**: + - "Which lens does this fit? (experience / visual / modernization / strategic)" + - Default to the auto-classified lens, but let the reviewer override + +### Review Output + +After all findings are reviewed, write `{outputDir}/reviewed-findings.json`: + +```json +{ + "reviewedAt": "2026-03-25T10:00:00Z", + "reviewer": "Dallas", + "totalFindings": 17, + "confirmed": 12, + "rejected": 3, + "modified": 2, + "findings": [ + { + "id": "nav-active-state", + "section": "Navigation & Wayfinding", + "severity": "high", + "title": "Active nav state is missing", + "description": "...", + "status": "confirmed", + "includeInReport": true, + "lens": "experience", + "designNotes": "Use aria-current with visible indicator", + "files": ["app/views/layouts/_nav.html.slim:12-28"] + } + ] +} +``` + +Also print a summary: + +``` +Phase 4 Complete: Interactive Review +Reviewed by: Dallas +Confirmed: 12 findings (10 for client report, 2 internal-only) +Rejected: 3 findings +Modified: 2 findings +Maybe/discuss: 2 findings + +Findings saved to: {outputDir}/reviewed-findings.json + +Next steps: + 1. Design work — create Figma mockups for the confirmed findings + 2. When designs are ready, run: /ux-audit report +``` + +### Session Break + +**This is where Session 1 ends.** The team now does the design work: +- Create Figma mockups for confirmed findings +- Build prototypes if needed +- Record a video walkthrough of the redesigns +- Set up an interactive demo deploy (optional) + +When designs are complete, start Session 2 with `/ux-audit report`. + +--- + +## Phase 5: Report Generation + +**Goal**: Generate the comprehensive client-facing report using confirmed findings from Phase 4 and completed design work. 
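Phase 5 consumes only the confirmed, client-facing subset of findings. A sketch of that filter with `jq` (which this guide already assumes for the GitHub search); the sample file is a trimmed stand-in for the real `reviewed-findings.json`:

```bash
# Minimal stand-in for {outputDir}/reviewed-findings.json
cat > /tmp/reviewed-findings.json <<'EOF'
{
  "findings": [
    { "id": "nav-active-state", "status": "confirmed", "includeInReport": true },
    { "id": "toolbar-density",  "status": "confirmed", "includeInReport": false },
    { "id": "footer-contrast",  "status": "rejected",  "includeInReport": false }
  ]
}
EOF

# Only confirmed findings flagged for the client report survive the filter
jq -r '.findings[] | select(.status == "confirmed" and .includeInReport) | .id' /tmp/reviewed-findings.json
# → nav-active-state
```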
+ +### Pre-flight: Gather Design Assets + +Before generating the report, ask the user about available design assets: + +1. **"Where are the Figma mockups?"** — Get the Figma file key and node IDs for redesign screens. Use `mcp__figma__get_screenshot(nodeId, fileKey)` to pull them. + +2. **"Is there a video walkthrough?"** — If yes, get the file path (e.g., `rapidair-demo.mp4`). This becomes a dedicated slide in the reveal deck. + +3. **"Is there a live demo URL?"** — If yes (e.g., `https://rapidair.vercel.app`), this becomes an interactive embed slide using `data-background-iframe`. + +4. **"Any screenshots to include?"** — Get paths to app screenshots (current state or redesigned). These go inline with each finding slide. + +5. **Read `{outputDir}/reviewed-findings.json`** — This is the source of truth for which findings to include. Only findings with `"includeInReport": true` go in the client report. Use each finding's `lens`, `designNotes`, and `severity` to inform the narrative. -**Goal**: Generate the comprehensive HTML audit report in the appropriate tone. +If `reviewed-findings.json` doesn't exist, warn the user: *"No reviewed findings found. Run `/ux-audit review` first to walk through findings with the team."* Offer to proceed with all Phase 3 findings as a fallback. ### Steps -1. **Select template** based on audience mode: - - Internal → [references/report-template.html](references/report-template.html) - - Client → [references/report-template-client.html](references/report-template-client.html) +1. 
**Select output path** based on audience mode and format:
+
+   | Audience | Format | Template | Notes |
+   |----------|--------|----------|-------|
+   | Internal | html | [report-template.html](references/report-template.html) + [.css](references/report-template.css) | Scrollable, file paths + line numbers |
+   | Client | html | [report-template-client.html](references/report-template-client.html) + [.css](references/report-template-client.css) | Scrollable, Then/Now/Next |
+   | Client | reveal | [report-template-reveal.html](references/report-template-reveal.html) | Self-contained slide deck, no companion CSS |
+   | Client | figma | [Figma template](references/figma-workflow.md) (`iyfRvWyTHSbNYtpBcjuvGg`) | Direct canvas write via Plugin API |
+
+   **CRITICAL — Do not write any CSS.** For HTML formats, the CSS files are complete and final. Copy the template HTML and its companion CSS file verbatim into the output directory. The only style override allowed is:
+   ```html
+
+   ```
+
+   **Reveal format**: Single self-contained file — CSS inlined, reveal.js from CDN. Same `{{PLACEHOLDER}}` names as the client HTML template. Content organized into `<section>
` slides. Supports PDF export via `?print-pdf` query parameter. + + **Figma format**: Load the `figma-use` skill, then use `use_figma` to write content section-by-section to a duplicate of the Figma template. See [references/figma-workflow.md](references/figma-workflow.md) for the complete workflow. The `figma-generate-design` skill handles discovering components and assembling screens. + + **Image sourcing** (all formats): + - **Cover image + client logo**: Web scraped from `brand.portfolioUrl` (Phase 1 step 5) + - **Current state screenshots**: Capture from `appUrl` via Playwright or browser automation + - **Redesign mockups from Figma**: Use `mcp__figma__get_screenshot(nodeId, fileKey)` to pull from the project's Figma design file + - **Embedding**: For reveal/HTML, use base64 data URIs for self-contained deployment, or external URLs for lighter files. For Figma format, images are inserted via the Plugin API 2. **Apply tone rules** from [references/tone-guide.md](references/tone-guide.md): - All finding titles, descriptions, and executive summary language must follow the active tone guide @@ -398,7 +767,7 @@ Classify each finding using [references/severity-model.md](references/severity-m - **Lens 4: Strategic Opportunities** — Where the product is constrained by old design decisions. What could it do that it doesn't today? This section should feel generative — a glimpse of v-next, not a punch list. Also include: - - **UX Principles Assessment** — 8-12 principles evaluated against actual behavior. Use plain-language names (NOT academic law names like "Jakob's Law"). Each gets status: `pass` (Aligned), `opportunity`, or `attention` (Needs Attention). + - **UX Principles Assessment** — 8-12 principles from [references/laws-of-ux.md](references/laws-of-ux.md) evaluated against actual behavior. Use plain-language names (NOT academic law names like "Jakob's Law"). Each gets status: `pass` (Aligned), `opportunity`, or `attention` (Needs Attention). 
- Typography and color palette visuals within the Visual & Brand Coherence lens **Color palette visual — Optics token structure** (when designSystem is "optics"): @@ -447,12 +816,23 @@ Classify each finding using [references/severity-model.md](references/severity-m Observation text format: `[ID] **Title.** Description. (file:line if applicable)` 5. Ensure `mkdir -p {outputDir}` exists -6. Write report to `{outputDir}/ux-audit-report.html` -7. Tell the user the file path and suggest opening in browser +6. **For html format**: Copy the companion CSS file into the output directory (e.g. `{outputDir}/report-template-client.css` or `{outputDir}/report-template.css`) — do not modify it + **For reveal format**: No companion CSS to copy — styles are inlined in the template +7. Write report to `{outputDir}/ux-audit-report.html` + - html format: include a `` pointing to the copied CSS file + - reveal format: the file is self-contained and ready to open directly +8. Tell the user the file path and suggest opening in browser +9. If `publish` is configured in `.ux-audit.json`, inform the user they can deploy with: + ``` + ./scripts/publish-report.sh + ``` + Or run `/ux-audit publish` to deploy the report to a public URL. -## Phase 5: Figma Deliverables +## Phase 6: Figma Deliverables -**Goal**: Generate DTCG token JSON and push audit report to Figma. +**Goal**: Generate DTCG token JSON and write the audit report to Figma's canvas. + +**⚠ Usage check**: Before starting, estimate Figma MCP calls needed (see [Figma MCP Usage Limits](#figma-mcp-usage-limits)) and confirm with the user. A full canvas write typically uses 15–35 calls. Token JSON generation uses zero Figma calls. ### Steps @@ -462,13 +842,25 @@ Classify each finding using [references/severity-model.md](references/severity-m - See [references/dtcg-format.md](references/dtcg-format.md) for format details - Tell user: "Import these via Figma > Local Variables > Import" -2. 
**Push report to Figma** using `mcp__figma__generate_figma_design`: - - Follow the workflow in [references/figma-workflow.md](references/figma-workflow.md) - - Call `generate_figma_design` without captureId to get the JS capture snippet +2. **Write report to Figma canvas** (if format is `"figma"` or explicitly requested): + + Follow the workflow in [references/figma-workflow.md](references/figma-workflow.md): + + a. **Load the `figma-use` skill** — MANDATORY before every `use_figma` call + b. **Duplicate the template** — Copy `figma.templateKey` (`iyfRvWyTHSbNYtpBcjuvGg`) to create a project-specific file + c. **Discover design system** — Use `search_design_system` to find components, variables, and styles + d. **Populate content section by section** — Use `use_figma` to: + - Update text nodes with audit findings (exec summary, Then items, Now sections, recommendations) + - Set color swatches to the project's actual palette + - Insert cover image and client logo (from web scraping) + - Insert section screenshots (from `get_screenshot` or browser capture) + - Update stat numbers and labels + e. **Export if needed** — Use `get_screenshot` for PNG exports, or link directly to the Figma file + + **Alternative (HTML-to-Figma capture)**: If direct canvas write is unavailable, fall back to `generate_figma_design` with the HTML report: - Start a local HTTP server to serve the report HTML - Guide user to open the URL with `#figmacapture&figmadelay=2000` - Poll with captureId once user confirms the capture toast appeared - - Return the Figma URL 3. 
**Report deliverables**: ``` @@ -478,8 +870,98 @@ Classify each finding using [references/severity-model.md](references/severity-m {outputDir}/dark.tokens.json (N tokens) Audit report: {outputDir}/ux-audit-report.html Figma: [URL or "skipped"] + + Next steps: + - Import token JSON via Figma > Local Variables > Import + - Publish report: ./scripts/publish-report.sh ``` +## Demo Recording (Optional Deliverable) + +For client-facing audits, an automated screen recording with narration can accompany the report. This uses Playwright for browser automation and ffmpeg for audio merging. + +See **[references/demo-recording-guide.md](references/demo-recording-guide.md)** for the full workflow, and **[scripts/record-demo-template.ts](scripts/record-demo-template.ts)** for a starter script. + +Quick summary: +1. Write a Playwright script that walks through the app, using `waitUntil()` timestamps synced to narration audio +2. Write narration text targeting the audience (client or internal) +3. Generate audio via TTS (Hedra, ElevenLabs, etc.) +4. Run `ffmpeg -af silencedetect` on the audio to find section boundaries +5. Update the script's `waitUntil()` targets to match, re-record +6. Merge video + audio: `ffmpeg -i video.webm -i audio.mp3 -c:v libx264 -c:a aac -shortest -y demo.mp4` + +## Phase 7: Publishing + +Deploy the generated report to a public URL for sharing with clients or stakeholders. The publish script handles everything — checking prerequisites, guiding auth setup, and deploying. + +### Invocation + +At the end of Phase 5 (report generation), the skill automatically asks: *"Ready to publish? I can deploy this to a shareable URL."* If yes, it runs the publish flow. You can also run it standalone: + +- `/ux-audit publish` +- `./scripts/publish-report.sh` + +### How it works + +1. Checks if the Vercel CLI is installed — installs it if not +2. Checks if authenticated — guides through login if not +3. Reads `.ux-audit.json` for `outputDir` and `brand.name` +4. 
Packages the report + assets (screenshots, video, SVGs) as a static site +5. Deploys and returns the public URL (e.g., `rapidair-assessment.vercel.app`) + +### First-Time Setup (handled automatically by the script) + +The publish script detects when setup is needed and walks the user through it: + +``` + ╔══════════════════════════════════════════╗ + ║ UX Audit Report Publisher ║ + ╚══════════════════════════════════════════╝ + + ⚠ Vercel CLI not found. Installing... + ✓ Installed. + + ⚠ Not authenticated with Vercel. + + To log in, you'll need your Vercel credentials. + These are in 1Password under "Vercel" (or ask your team lead). + + Run this command: + npx vercel login + + Then re-run this script. +``` + +### Configuration + +Set in `.ux-audit.json` under `"publish"`: + +```json +{ + "publish": { + "provider": "vercel", + "projectName": "rapidair-assessment" + } +} +``` + +- `provider` — `"vercel"` (default), `"netlify"`, or `"surge"` +- `projectName` — the URL slug (becomes `{slug}.vercel.app`). Default: `{brand.name}-assessment`. Keep it clean and client-facing — no internal tool names. + +### Requirements + +- Node.js installed (for `npx`) +- **Vercel** (default): The script checks auth automatically. Credentials are in **1Password** under "Vercel". Run `npx vercel login` to authenticate. +- **Netlify**: `npx netlify-cli login` or `NETLIFY_AUTH_TOKEN` env var +- **Surge**: `npx surge login` + +### Notes + +- The reveal format works especially well for publishing — it's a single self-contained HTML file. When the report uses local assets (screenshots, video), those are copied alongside `index.html` into the deploy directory automatically. +- Published reports are static sites — no server-side code, no database, no build step +- Each deployment creates a unique URL. Re-deploying the same project name updates the existing URL. +- The audits repo (`rolemodel-ux-audits/`) convention is one folder per client: `rapidair/index.html`, `clientname/index.html`, etc. 
Each folder can be published independently. + ## Strict Rules 1. **Never fabricate findings.** Every finding must reference a specific file path and line number or code pattern that you verified by reading the actual source code. If you cannot find evidence, do not report the finding. @@ -495,3 +977,11 @@ Classify each finding using [references/severity-model.md](references/severity-m 6. **Be specific about effort.** Migration time estimates should be based on actual file count, component count, and token mapping completeness — not generic guesses. 7. **Use the TodoWrite tool** to track progress through the phases. Mark each phase as completed when done. + +8. **Never create, modify, or inject CSS.** The report templates ship with companion `.css` files that define all styling. Copy the CSS file alongside the HTML into the output directory unchanged. Do not add ` + + +
+ +
+ +
+
+
+ RoleModel Software · February 2026 +
+ +

Opportunity Assessment

+
+
+ + +
+
+ +
+ RapidAir is a genuinely capable product with a strong technical + foundation — + the opportunity is to bring the experience up to the same level + as the engineering. +
+
+
+

+ The application delivers on its core promise: zero-friction + project creation, real-time pipe calculations, and contextual + drawing tools that guide users in the moment. These aren't + minor wins — they represent deliberate, user-centered + decisions that hold up today. +

+

+ What's evolved is the surrounding experience. User + expectations around onboarding, accessibility, visual + consistency, and workspace efficiency have moved since the + original build. The 15 findings in this assessment map + directly to those gaps — each with a clear, practical path + forward. +

+
+
+
+
4
+
+ Areas of improvement shown in this deck, each with a + proposed direction. +
+
+
+
3
+
+ Improvements built directly on Optics components — floating + panels, settings, and wizard overlay. +
+
+
+
22
+
+ Hardcoded color values to consolidate into Optics tokens + with WCAG-safe pairings built in. +
+
+
+
+
+
+ + +
+
+ +
Then.
+
+
+ Zero-friction entry +

+ Start designing without an account. Projects are saved locally + and the editor is immediately available — no signup wall, no + onboarding funnel. +

+
+
+ Reactive pipe calculator +

+ Recommended pipe size updates in real time as HP and CFM + values change, giving users immediate, contextual guidance + without a manual lookup step. +

+
+
+ Contextual tool help +

+ Instruction popups appear exactly when a drawing tool is + selected, reducing the learning curve for first-time users of + the design canvas. +

+
+
+ Thoughtful button system +

+ Five button variants with hover and disabled states. "Start + Over" uses a distinct destructive style — intentional visual + hierarchy from day one. +

+
+
+ Silent autosave +

+ The editor saves continuously in the background. Users never + lose work and never see a save button — the system just + handles it. +

+
+
+ Inline guidance +

+ A 6-step instruction panel and 5 help videos live inside the + app. Users can learn the tool without leaving their workspace + or opening docs. +

+
+
+
+
+ + +
+ +
+
+ +
Now.
+

+ Four areas where we've done the design work. Each improvement is + scoped, practical, and built on the Optics design system you + already use. +

+
+
+ +
+
+ +
01
+
+ A smarter toolbar that stays out of the way. +
+
+ Drawing tools live in a permanent left sidebar today — always + visible, always consuming space. The redesign moves them into a + floating, collapsible panel paired with a new top toolbar for + undo, redo, snap, and view controls. The canvas gets its space + back. +
+
✓ Designed
+
+
+ +
+
+ +
02
+
+ Settings when you need them — not always. +
+
+ Project and building settings are permanently visible in a right + sidebar, taking up a third of the screen. A dropdown panel that + opens on demand gives the same access without the constant + footprint — premium canvas space stays clear for the work that + matters. +
+
✓ Designed
+
+
+ +
+
+ +
03
+
+ First-run should feel like a welcome, not a wall. +
+
+ New users currently hit a legal disclaimer and an empty canvas + with no orientation. A guided walkthrough wizard introduces core + concepts step by step — draw a pipe, set building parameters, + generate a report — building confidence before complexity. The + wizard uses an Optics overlay panel so it layers naturally on + top of the existing canvas. +
+
+
+ +
+
+ +
04
+
Meet users where they are.
+
+ The application was designed for desktop and doesn't adapt to + smaller screens. Field technicians verifying installations + on-site currently can't reference their designs on a tablet or + phone. The floating panel architecture from the toolbar redesign + creates a natural foundation for a responsive layout — mobile + becomes an extension, not a rewrite. +
+
+
+
+ + +
+
+ +
Color System
+
+ Hardcoded color values consolidated into tokens that are tested + and accessible. +
+
+
+ Primary — Brand Gold (H:46, S:97%) +
+
+
+ +8 +
+
+ +5 +
+
+ +2 +
+
+ base +
+
+ −3 +
+
+ −7 +
+
+ −9 +
+
+
+
+
+ Neutral — Blue-Tinted Gray (H:226, S:5%) +
+
+
+ +8 +
+
+ +5 +
+
+ +2 +
+
+ base +
+
+ −5 +
+
+ −7 +
+
+ −9 +
+
+
+
+
+ Alerts — notice · danger · warning · info +
+
+
+ notice +
+
+ danger +
+
+ warning +
+
+ info +
+
+
+
+
+ + +
+
+ +

+ We evaluated the application against core UX principles. Each is + assessed based on what we observed in the actual application. +

+
+
+
Aligned
+
+ Familiar patterns reduce learning time +
+
+ Standard web patterns used throughout — top nav, sidebar + panels, form inputs. Users recognize this layout immediately. +
+
+
+
Aligned
+
+ Real-time feedback builds confidence +
+
+ Pipe calculator updates instantly as values change. Autosave + runs silently in the background. Users always know where they + stand. +
+
+
+
Aligned
+
+ Progressive disclosure keeps things manageable +
+
+ Help content appears contextually when tools are selected. The + instruction panel is available but not forced on experienced + users. +
+
+
+
Opportunity
+
+ Important actions should be easy to reverse +
+
+ "Start Over" clears the entire drawing with a single click and + a browser-native confirm dialog that's easy to dismiss without + reading. +
+
+
+
Opportunity
+
Less is more for focused tasks
+
+ The toolbar shows all 8 drawing tools simultaneously. + Contextual grouping — showing tools relevant to the current + task — would reduce cognitive load. +
+
+
+
Opportunity
+
+ First impressions shape long-term perception +
+
+ New users land on a legal disclaimer followed by an empty + canvas. No guided introduction, no sample project, no "start + here" prompt. +
+
+
+
Opportunity
+
+ Visual consistency signals quality +
+
+ 22 hardcoded color values create subtle inconsistencies across + the interface. Consolidating to Optics tokens would unify the + visual language. +
+
+
+
Needs Attention
+
+ Keyboard users should reach every feature +
+
+ Drawing tool buttons, the settings panel, and the help overlay + are unreachable via keyboard navigation. Tab order skips + interactive elements. +
+
+
+
Needs Attention
+
+ Touch targets need room to breathe +
+
+ Toolbar icons are 24×24px with no padding — well below the + 44×44px minimum for accessible touch targets on mobile and + tablet. +
+
+
+
+
+ + +
+
+ +
RapidAir is a capable product with a strong technical foundation.
+              These four recommendations bring the experience up to the
+              same level.
+
+
+
+
01
+
+ Rebuild the toolbar — floating panels and a streamlined tool + set declutter the canvas immediately +
+
+
+
02
+
+ Move settings into a dropdown — the right sidebar is premium + canvas space, not a settings panel +
+
+
+
03
+
+ Ship the walkthrough wizard with the next release — first-run + is the highest-leverage moment in the product +
+
+
+
04
+
+ Plan mobile as the next phase — the floating panel + work sets it up as a natural extension, not a rewrite +
+
+ +
+ +
+
+
+
+ + + + + diff --git a/skills/ux-audit/references/report-template-reveal.html b/skills/ux-audit/references/report-template-reveal.html new file mode 100644 index 0000000..1c1cedf --- /dev/null +++ b/skills/ux-audit/references/report-template-reveal.html @@ -0,0 +1,1054 @@ + + + + + + {{PROJECT_NAME}} — Opportunity Assessment + + + + + + + +
+ +
+ + +
+
+
+ RoleModel Software · {{DATE}} +
+ + +

Opportunity Assessment

+
+
+ + +
+
+ +
+ + +
+
+
+ + +
+
+ + +
+
+
+
+ + +
+
+ +
Then.
+
+ + +
+
+
+ + +
+ +
+
+ +
Now.
+

+ + +

+
+
+ + + +
+ + +
+
+ +
Color System
+
+ + +
+ + +
+
+ + +
+
+ +

+ We evaluated the application against core UX principles. Each is + assessed based on what we observed in the actual application. +

+
+ + +
+
+
+ + +
+
+ +
+ + +
+
+ + +
+ +
+
+ +
+
+ + + + + diff --git a/skills/ux-audit/references/report-template.css b/skills/ux-audit/references/report-template.css new file mode 100644 index 0000000..31f558a --- /dev/null +++ b/skills/ux-audit/references/report-template.css @@ -0,0 +1,508 @@ +* { margin: 0; padding: 0; box-sizing: border-box; } +body, .page, .exec-summary, .section { display: flex; flex-direction: column; } + +body { + font-family: 'DM Sans', -apple-system, sans-serif; + background: #FEFEFE; + color: #181A18; + font-size: 16px; + line-height: 1.5; +} + +.page { + display: flex; + flex-direction: column; + max-width: 1080px; + margin: 0 auto; + padding: 96px 145px 120px; +} + +/* ── HEADER ── */ +.eyebrow { + font-size: 11px; + font-weight: 700; + letter-spacing: 0.2em; + text-transform: uppercase; + color: #3A70B3; + display: flex; + margin-bottom: 12px; +} + +.doc-title { + font-size: 44px; + font-weight: 700; + letter-spacing: -0.03em; + line-height: 1.2; + color: #181A18; + padding-bottom: 24px; + border-bottom: 1px solid #E0E0E0; + margin-bottom: 32px; + display: flex; +} + +/* ── METADATA ── */ +.meta-fields { + display: flex; + flex-direction: column; + gap: 6px; + margin-bottom: 40px; + font-size: 12px; + color: #515151; +} + +.meta-fields p strong { + font-weight: 700; +} + +/* ── CALLOUT ── */ +.callout { + background: #F5FBFF; + border: 1px solid #0873C4; + border-radius: 8px; + padding: 24px; + margin-bottom: 56px; + display: flex; + flex-direction: column; + gap: 10px; +} + +.callout-title { + font-size: 20px; + font-weight: 700; + letter-spacing: -0.02em; + color: #181A18; + display: flex; +} + +.callout p { + font-size: 12px; + color: #515151; + line-height: 1.6; + display: flex; +} + +.severity-row { + display: flex; + flex-direction: row; + align-items: center; + gap: 16px; + flex-wrap: wrap; +} + +.severity-item { + display: flex; + flex-direction: row; + align-items: center; + gap: 8px; + font-size: 12px; + color: #515151; +} + +.severity-dot { + width: 16px; + height: 16px; 
+ border-radius: 50%; + flex-shrink: 0; +} + +.dot-green { background: #4CAF50; } +.dot-yellow { background: #FFC107; } +.dot-red { background: #F44336; } + +.callout-note { + font-size: 12px; + color: #515151; + line-height: 1.6; + display: flex; +} + +/* ── EXECUTIVE SUMMARY ── */ +.exec-summary { + background: #F9F9F9; + border: 1px solid #E0E0E0; + border-radius: 8px; + padding: 24px; + margin-bottom: 56px; + display: flex; + flex-direction: column; +} + +.exec-title { + font-size: 20px; + font-weight: 700; + letter-spacing: -0.02em; + color: #181A18; + margin-bottom: 16px; + display: flex; +} + +.stat-row { + display: flex; + flex-direction: row; + gap: 32px; + flex-wrap: wrap; + margin-bottom: 20px; +} + +.stat-box { + display: flex; + flex-direction: column; + gap: 2px; +} + +.stat-num { + font-size: 36px; + font-weight: 800; + color: #3A70B3; + letter-spacing: -0.03em; + line-height: 1; + display: flex; +} + +.stat-label { + font-size: 11px; + font-weight: 600; + text-transform: uppercase; + letter-spacing: 0.1em; + color: #757575; + display: inline-flex; +} + +.exec-verdict { + font-size: 13px; + color: #181A18; + line-height: 1.7; + padding: 12px 16px; + background: rgba(58, 112, 179, 0.06); + border-left: 3px solid #3A70B3; + border-radius: 0 6px 6px 0; + display: flex; +} + +/* ── SECTION ── */ +.section { + margin-bottom: 56px; + display: flex; +} + +.section-header { + border-bottom: 1px solid #E0E0E0; + padding-bottom: 20px; + margin-bottom: 12px; + display: flex; +} + +.section-title { + font-size: 28px; + font-weight: 700; + letter-spacing: -0.03em; + color: #181A18; + display: flex; +} + +.section-intro { + font-size: 17px; + line-height: 1.8; + color: #545454; + margin-bottom: 20px; + display: flex; +} + +/* ── CHECKLIST ITEMS ── */ +.checklist { + display: flex; + flex-direction: column; + gap: 0; + margin-bottom: 24px; +} + +.check-item { + display: flex; + flex-direction: column; + gap: 0; + padding: 10px 0; +} + +.check-row { + display: flex; 
+ flex-direction: row; + align-items: flex-start; + gap: 12px; +} + +.check-row input[type="checkbox"] { + width: 16px; + height: 16px; + flex-shrink: 0; + margin-top: 3px; + border: 1px solid #757575; + border-radius: 4px; + appearance: none; + -webkit-appearance: none; + background: #FFFFFF; + cursor: pointer; + position: relative; +} + +.check-row input[type="checkbox"]:checked { + background: #3A70B3; + border-color: #3A70B3; +} + +.check-row input[type="checkbox"]:checked::after { + content: ''; + position: absolute; + left: 4px; + top: 1px; + width: 5px; + height: 9px; + border: 2px solid white; + border-top: none; + border-left: none; + transform: rotate(45deg); +} + +.check-label { + font-size: 16px; + color: #1E1E1E; + line-height: 1.4; + flex: 1; +} + +.check-description { + display: flex; + flex-direction: row; + gap: 12px; + padding-top: 4px; +} + +.check-description .spacer { + width: 16px; + flex-shrink: 0; +} + +.check-description p { + font-size: 14px; + color: #757575; + line-height: 1.5; + flex: 1; + display: flex; +} + +/* ── OBSERVATIONS ── */ +.observations-label { + font-size: 17px; + font-weight: 700; + color: #545454; + margin-top: 8px; + margin-bottom: 8px; + display: flex; +} + +.observations-area { + border: 1px solid #E0E0E0; + border-radius: 6px; + min-height: 72px; + padding: 12px 16px; + font-size: 14px; + color: #181A18; + line-height: 1.6; + display: flex; +} + +/* ── TASKS BLOCK ── */ +.tasks-block { + margin: 16px 0; + padding: 16px 20px; + background: #F9F9F9; + border-radius: 6px; + font-size: 15px; + color: #181A18; + line-height: 2; + display: flex; +} + +.tasks-block p { font-weight: 700; margin-bottom: 4px; } + +/* ── TOKEN MAPPING TABLE ── */ +.token-table { + width: 100%; + border-collapse: collapse; + font-size: 13px; + margin-top: 12px; + margin-bottom: 24px; +} + +.token-table th { + background: #F5F5F5; + font-weight: 700; + text-align: left; + padding: 10px 14px; + border: 1px solid #E0E0E0; + font-size: 11px; + 
letter-spacing: 0.06em; + text-transform: uppercase; + color: #545454; +} + +.token-table td { + padding: 10px 14px; + border: 1px solid #E0E0E0; + color: #181A18; + vertical-align: top; +} + +.token-table tr:nth-child(even) td { background: #FAFAFA; } + +.swatch { + display: inline-block; + width: 14px; + height: 14px; + border-radius: 3px; + border: 1px solid rgba(0,0,0,0.1); + vertical-align: middle; + margin-right: 6px; +} + +.token-table code { + background: #F5F5F5; + padding: 2px 6px; + border-radius: 3px; + font-size: 11px; + font-family: 'SF Mono', 'Fira Code', monospace; +} + +.match { color: #2E7D32; font-weight: 600; font-size: 11px; } +.close { color: #E65100; font-weight: 600; font-size: 11px; } +.miss { color: #C62828; font-weight: 600; font-size: 11px; } + +/* ── HARDCODED VALUES BAR CHART ── */ +.bar-row { + display: flex; + flex-direction: row; + align-items: center; + gap: 12px; + padding: 8px 0; + border-bottom: 1px solid #F0F0F0; +} + +.bar-label { + font-size: 12px; + font-weight: 600; + min-width: 240px; + font-family: 'SF Mono', 'Fira Code', monospace; + color: #181A18; +} + +.bar-track { + flex: 1; + height: 18px; + background: #F0F0F0; + border-radius: 4px; + overflow: hidden; +} + +.bar-fill { height: 100%; border-radius: 4px; } +.bar-fill.high { background: linear-gradient(90deg, #F44336, #C62828); } +.bar-fill.medium { background: linear-gradient(90deg, #FFA726, #E65100); } +.bar-fill.low { background: linear-gradient(90deg, #66BB6A, #2E7D32); } + +.bar-count { + font-size: 12px; + font-weight: 700; + min-width: 30px; + text-align: right; + color: #545454; +} + +/* ── FINDINGS TABLE ── */ +.findings-table { + width: 100%; + border-collapse: collapse; + font-size: 14px; + margin-top: 16px; +} + +.findings-table th { + background: #F5F5F5; + font-weight: 700; + text-align: left; + padding: 10px 14px; + border: 1px solid #E0E0E0; + font-size: 12px; + letter-spacing: 0.06em; + text-transform: uppercase; + color: #545454; +} + 
+.findings-table td { + padding: 12px 14px; + border: 1px solid #E0E0E0; + color: #181A18; + height: 44px; +} + +.findings-table tr:nth-child(even) td { background: #FAFAFA; } + +.severity-badge { + font-size: 10px; + font-weight: 700; + padding: 2px 8px; + border-radius: 10px; + color: white; + display: inline-block; +} + +.badge-critical { background: #C62828; } +.badge-high { background: #E65100; } +.badge-medium { background: #1565C0; } +.badge-pattern { background: #6A1B9A; } + +/* ── COMPONENT MAPPING ── */ +.comp-grid { + display: flex; + flex-direction: row; + flex-wrap: wrap; + gap: 10px; + margin-top: 12px; +} + +.comp-card { + flex: 0 0 calc(50% - 5px); + background: white; + border: 1px solid #E0E0E0; + border-radius: 8px; + padding: 14px 16px; + display: flex; + flex-direction: row; + align-items: center; + gap: 10px; +} + +.comp-from { font-size: 12px; font-weight: 600; color: #C62828; min-width: 140px; font-family: 'SF Mono', monospace; } +.comp-arrow { color: #3A70B3; font-size: 16px; flex-shrink: 0; } +.comp-to { font-size: 13px; font-weight: 600; color: #2E7D32; } +.comp-tokens { font-size: 11px; color: #888; } + +/* ── FOOTER ── */ +.doc-footer { + margin-top: 80px; + padding-top: 24px; + border-top: 1px solid #E0E0E0; + font-size: 12px; + color: #999; + line-height: 1.8; +} + +hr { + border: none; + border-top: 1px solid #E0E0E0; + margin: 0 0 56px 0; +} + +h3.table-label { + font-size: 17px; + font-weight: 700; + color: #181A18; + margin: 24px 0 8px; +} diff --git a/skills/ux-audit/scripts/publish-report.sh b/skills/ux-audit/scripts/publish-report.sh new file mode 100755 index 0000000..d1316b7 --- /dev/null +++ b/skills/ux-audit/scripts/publish-report.sh @@ -0,0 +1,238 @@ +#!/usr/bin/env bash +# +# publish-report.sh +# +# Deploys a UX audit report to a public URL as a static site. 
+#
+# Usage:
+#   ./publish-report.sh                    # Interactive mode
+#   ./publish-report.sh [output-dir] [--provider vercel|netlify|surge] [--name project-name]
+#
+# Examples:
+#   ./publish-report.sh
+#   ./publish-report.sh dev-tools/ux-audit-output
+#   ./publish-report.sh dev-tools/ux-audit-output --provider netlify
+#   ./publish-report.sh dev-tools/ux-audit-output --provider vercel --name my-app-ux-audit
+#
+# Prerequisites:
+#   - Node.js installed (for npx)
+#   - For Vercel: authenticated via `npx vercel login` or VERCEL_TOKEN env var
+#   - For Netlify: authenticated via `npx netlify-cli login` or NETLIFY_AUTH_TOKEN env var
+#   - For Surge: authenticated via `npx surge login`
+#
+
+set -euo pipefail
+
+# --- Colors ---
+BLUE='\033[0;34m'
+GREEN='\033[0;32m'
+YELLOW='\033[0;33m'
+CYAN='\033[0;36m'
+BOLD='\033[1m'
+DIM='\033[2m'
+RESET='\033[0m'
+
+prompt() {
+  local message="$1"
+  local default="$2"
+  local varname="$3"
+  printf "${BLUE}${BOLD}  %s${RESET} ${GREEN}[%s]${RESET}${BLUE}: ${RESET}" "$message" "$default"
+  read -r input
+  # printf -v assigns without eval, so user input is never re-parsed as shell code
+  printf -v "$varname" '%s' "${input:-$default}"
+}
+
+prompt_choice() {
+  local message="$1"
+  local options="$2"
+  local default="$3"
+  local varname="$4"
+  printf "${BLUE}${BOLD}  %s${RESET} ${DIM}(%s)${RESET} ${GREEN}[%s]${RESET}${BLUE}: ${RESET}" "$message" "$options" "$default"
+  read -r input
+  printf -v "$varname" '%s' "${input:-$default}"
+}
+
+# --- Header ---
+echo ""
+printf "${CYAN}${BOLD}  ╔══════════════════════════════════════════╗${RESET}\n"
+printf "${CYAN}${BOLD}  ║       UX Audit Report Publisher          ║${RESET}\n"
+printf "${CYAN}${BOLD}  ║   Deploy your report to a public URL     ║${RESET}\n"
+printf "${CYAN}${BOLD}  ╚══════════════════════════════════════════╝${RESET}\n"
+echo ""
+
+# --- Auto-detect defaults from .ux-audit.json ---
+AUTO_OUTPUT_DIR="dev-tools/ux-audit-output"
+AUTO_PROJECT_NAME="assessment"
+
+if [ -f .ux-audit.json ]; then
+  DETECTED_DIR=$(python3 -c "import json; print(json.load(open('.ux-audit.json')).get('outputDir', ''))" 2>/dev/null || echo "")
+  if 
[ -n "$DETECTED_DIR" ]; then + AUTO_OUTPUT_DIR="$DETECTED_DIR" + fi + + DETECTED_BRAND=$(python3 -c "import json; print(json.load(open('.ux-audit.json')).get('brand', {}).get('name', ''))" 2>/dev/null || echo "") + DETECTED_PUBLISH_NAME=$(python3 -c "import json; print(json.load(open('.ux-audit.json')).get('publish', {}).get('projectName', ''))" 2>/dev/null || echo "") + if [ -n "$DETECTED_PUBLISH_NAME" ]; then + AUTO_PROJECT_NAME=$(echo "$DETECTED_PUBLISH_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | sed 's/[^a-z0-9-]//g') + elif [ -n "$DETECTED_BRAND" ]; then + # Default: "{brand}-assessment" — clean, client-facing URL + AUTO_PROJECT_NAME=$(echo "$DETECTED_BRAND-assessment" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | sed 's/[^a-z0-9-]//g') + fi +fi + +# --- Interactive or headless --- +if [ $# -eq 0 ]; then + # Interactive mode + printf "${DIM} Answer the prompts below. Press Enter to accept defaults.${RESET}\n" + + if [ -f .ux-audit.json ]; then + printf "${DIM} (Detected .ux-audit.json - using auto-detected values)${RESET}\n" + fi + echo "" + + prompt "Output directory" "$AUTO_OUTPUT_DIR" OUTPUT_DIR + prompt_choice "Deploy provider" "vercel, netlify, surge" "vercel" PROVIDER + prompt "URL slug (becomes {slug}.vercel.app)" "$AUTO_PROJECT_NAME" PROJECT_NAME +else + # Headless mode - parse arguments + OUTPUT_DIR="" + PROVIDER="vercel" + PROJECT_NAME="" + + while [[ $# -gt 0 ]]; do + case "$1" in + --provider) + PROVIDER="$2" + shift 2 + ;; + --provider=*) + PROVIDER="${1#*=}" + shift + ;; + --name) + PROJECT_NAME="$2" + shift 2 + ;; + --name=*) + PROJECT_NAME="${1#*=}" + shift + ;; + -*) + printf "${YELLOW} Unknown option: %s${RESET}\n" "$1" >&2 + exit 1 + ;; + *) + OUTPUT_DIR="$1" + shift + ;; + esac + done + + # Apply auto-detected defaults for unset values + if [ -z "$OUTPUT_DIR" ]; then + OUTPUT_DIR="$AUTO_OUTPUT_DIR" + fi + if [ -z "$PROJECT_NAME" ]; then + PROJECT_NAME="$AUTO_PROJECT_NAME" + fi +fi + +# --- Validate report exists --- 
+REPORT_FILE="$OUTPUT_DIR/ux-audit-report.html"
+# Also check for index.html (the audits repo convention)
+if [ ! -f "$REPORT_FILE" ] && [ -f "$OUTPUT_DIR/index.html" ]; then
+  REPORT_FILE="$OUTPUT_DIR/index.html"
+fi
+if [ ! -f "$REPORT_FILE" ]; then
+  echo ""
+  printf "${YELLOW}  ✗ Report not found at %s${RESET}\n" "$OUTPUT_DIR" >&2
+  printf "${YELLOW}    Run /ux-audit report first to generate it.${RESET}\n" >&2
+  exit 1
+fi
+
+# --- Check prerequisites ---
+check_vercel_auth() {
+  printf "${DIM}  Checking Vercel setup...${RESET}\n"
+
+  # Check if the Vercel CLI is available (globally or via npx)
+  if ! command -v vercel >/dev/null 2>&1 && ! npx -y vercel --version >/dev/null 2>&1; then
+    printf "${YELLOW}  ⚠ Vercel CLI not found. Installing...${RESET}\n"
+    # Only report success if one of the install paths actually worked
+    if npm install -g vercel >/dev/null 2>&1 || npx -y vercel --version >/dev/null 2>&1; then
+      printf "${GREEN}  ✓ Installed.${RESET}\n"
+    else
+      printf "${YELLOW}  ✗ Could not install the Vercel CLI. Install it manually: npm install -g vercel${RESET}\n" >&2
+      exit 1
+    fi
+  fi
+
+  # Check if authenticated
+  if ! npx -y vercel whoami >/dev/null 2>&1; then
+    echo ""
+    printf "${YELLOW}  ⚠ Not authenticated with Vercel.${RESET}\n"
+    echo ""
+    printf "${BOLD}  To log in, you'll need your Vercel credentials.${RESET}\n"
+    printf "${DIM}  These are in 1Password — search for \"Vercel\".${RESET}\n"
+    printf "${DIM}  (Or ask your team lead for access.)${RESET}\n"
+    echo ""
+    printf "${BOLD}  Run this command, then re-run the publish script:${RESET}\n"
+    echo ""
+    printf "${CYAN}    npx vercel login${RESET}\n"
+    echo ""
+    exit 1
+  fi
+
+  VERCEL_USER=$(npx -y vercel whoami 2>/dev/null)
+  printf "${GREEN}  ✓ Authenticated as ${BOLD}%s${RESET}\n" "$VERCEL_USER"
+}
+
+if [ "$PROVIDER" = "vercel" ]; then
+  check_vercel_auth
+fi
+echo ""
+
+# --- Summary ---
+printf "${CYAN}${BOLD}  ── Deploy Configuration ───────────────────${RESET}\n"
+printf "${BOLD}  Report:    ${RESET}%s\n" "$REPORT_FILE"
+printf "${BOLD}  Provider:  ${RESET}%s\n" "$PROVIDER"
+printf "${BOLD}  URL slug:  ${RESET}%s.vercel.app\n" "$PROJECT_NAME"
+printf "${CYAN}  ─────────────────────────────────────────────${RESET}\n"
+echo ""
+
+# --- Create deploy 
directory ---
+DEPLOY_DIR=$(mktemp -d)
+trap 'rm -rf "$DEPLOY_DIR"' EXIT
+
+# Copy report as index.html
+cp "$REPORT_FILE" "$DEPLOY_DIR/index.html"
+
+# Copy all assets alongside the report (images, video, CSS, SVGs)
+for ASSET in "$OUTPUT_DIR"/*.{png,jpg,jpeg,webp,mp4,webm,svg,css}; do
+  [ -f "$ASSET" ] && cp "$ASSET" "$DEPLOY_DIR/"
+done
+
+# --- Deploy ---
+case "$PROVIDER" in
+  vercel)
+    printf "${BOLD}  Deploying to Vercel...${RESET}\n"
+    # `|| true` keeps `set -e` from exiting before we can report a failed deploy
+    VERCEL_OUTPUT=$(npx -y vercel deploy "$DEPLOY_DIR" --prod --yes 2>&1 || true)
+    VERCEL_URL=$(echo "$VERCEL_OUTPUT" | grep -oE 'https://[^ ]+\.vercel\.app' | tail -1 || true)
+    if [ -n "$VERCEL_URL" ]; then
+      echo ""
+      printf "${GREEN}${BOLD}  ✓ Live at: ${RESET}${BOLD}${VERCEL_URL}${RESET}\n"
+    else
+      echo "$VERCEL_OUTPUT"
+      printf "${YELLOW}  ⚠ Deploy may have failed. Check output above.${RESET}\n"
+    fi
+    ;;
+  netlify)
+    printf "${BOLD}  Deploying to Netlify...${RESET}\n"
+    npx -y netlify-cli deploy --prod --dir "$DEPLOY_DIR" --site "$PROJECT_NAME"
+    ;;
+  surge)
+    printf "${BOLD}  Deploying to Surge...${RESET}\n"
+    npx -y surge "$DEPLOY_DIR" "${PROJECT_NAME}.surge.sh"
+    ;;
+  *)
+    printf "${YELLOW}  ✗ Unknown provider '%s'. Use: vercel, netlify, or surge${RESET}\n" "$PROVIDER" >&2
+    exit 1
+    ;;
+esac
+
+echo ""
+printf "${GREEN}${BOLD}  Done.${RESET}\n"
+echo ""
diff --git a/skills/ux-audit/scripts/record-demo-template.ts b/skills/ux-audit/scripts/record-demo-template.ts
new file mode 100644
index 0000000..f2c1af5
--- /dev/null
+++ b/skills/ux-audit/scripts/record-demo-template.ts
@@ -0,0 +1,96 @@
+/**
+ * Automated Demo Recording Template
+ *
+ * Adapt this template for each project. Replace the section actions
+ * with interactions specific to the app being demoed.
+ *
+ * Usage:
+ *   1. npm i -D @playwright/test tsx
+ *   2. npx playwright install chromium
+ *   3. Start the dev server
+ *   4. npx tsx scripts/record-demo.ts
+ *
+ * After recording, merge with narration audio:
+ *   ffmpeg -i recordings/