diff --git a/Skills/analyze-project/SKILL.md b/Skills/analyze-project/SKILL.md new file mode 100644 index 0000000..20e8dae --- /dev/null +++ b/Skills/analyze-project/SKILL.md @@ -0,0 +1,432 @@ +--- +name: analyze-project +description: Forensic root cause analyzer for Antigravity sessions. Classifies scope deltas, rework patterns, root causes, hotspots, and auto-improves prompts/health. +version: "1.0" +tags: [analysis, diagnostics, meta, root-cause, project-health, session-review] +--- + +# /analyze-project — Root Cause Analyst Workflow + +Analyze AI-assisted coding sessions in `~/.gemini/antigravity/brain/` and produce a report that explains not just **what happened**, but **why it happened**, **who/what caused it**, and **what should change next time**. + +## Goal + +For each session, determine: + +1. What changed from the initial ask to the final executed work +2. Whether the main cause was: + - user/spec + - agent + - repo/codebase + - validation/testing + - legitimate task complexity +3. Whether the opening prompt was sufficient +4. Which files/subsystems repeatedly correlate with struggle +5. 
What changes would most improve future sessions + +## Global Rules + +- Treat `.resolved.N` counts as **iteration signals**, not proof of failure +- Separate **human-added scope**, **necessary discovered scope**, and **agent-introduced scope** +- Separate **agent error** from **repo friction** +- Every diagnosis must include **evidence** and **confidence** +- Confidence levels: + - **High** = direct artifact/timestamp evidence + - **Medium** = multiple supporting signals + - **Low** = plausible inference, not directly proven +- Evidence precedence: + - artifact contents > timestamps > metadata summaries > inference +- If evidence is weak, say so + +--- + +## Step 0.5: Session Intent Classification + +Classify the primary session intent from objective + artifacts: + +- `DELIVERY` +- `DEBUGGING` +- `REFACTOR` +- `RESEARCH` +- `EXPLORATION` +- `AUDIT_ANALYSIS` + +Record: +- `session_intent` +- `session_intent_confidence` + +Use intent to contextualize severity and rework shape. +Do not judge exploratory or research sessions by the same standards as narrow delivery sessions. + +--- + +## Step 1: Discover Conversations + +1. Read available conversation summaries from system context +2. List conversation folders in the user’s Antigravity `brain/` directory +3. Build a conversation index with: + - `conversation_id` + - `title` + - `objective` + - `created` + - `last_modified` +4. If the user supplied a keyword/path, filter to matching conversations; otherwise analyze all + +Output: indexed list of conversations to analyze. + +--- + +## Step 2: Extract Session Evidence + +For each conversation, read if present: + +### Core artifacts +- `task.md` +- `implementation_plan.md` +- `walkthrough.md` + +### Metadata +- `*.metadata.json` + +### Version snapshots +- `task.md.resolved.0 ... N` +- `implementation_plan.md.resolved.0 ... N` +- `walkthrough.md.resolved.0 ... 
N` + +### Additional signals +- other `.md` artifacts +- timestamps across artifact updates +- file/folder/subsystem names mentioned in plans/walkthroughs +- validation/testing language +- explicit acceptance criteria, constraints, non-goals, and file targets + +Record per conversation: + +#### Lifecycle +- `has_task` +- `has_plan` +- `has_walkthrough` +- `is_completed` +- `is_abandoned_candidate` = task exists but no walkthrough + +#### Revision / change volume +- `task_versions` +- `plan_versions` +- `walkthrough_versions` +- `extra_artifacts` + +#### Scope +- `task_items_initial` +- `task_items_final` +- `task_completed_pct` +- `scope_delta_raw` +- `scope_creep_pct_raw` + +#### Timing +- `created_at` +- `completed_at` +- `duration_minutes` + +#### Content / quality +- `objective_text` +- `initial_plan_summary` +- `final_plan_summary` +- `initial_task_excerpt` +- `final_task_excerpt` +- `walkthrough_summary` +- `mentioned_files_or_subsystems` +- `validation_requirements_present` +- `acceptance_criteria_present` +- `non_goals_present` +- `scope_boundaries_present` +- `file_targets_present` +- `constraints_present` + +--- + +## Step 3: Prompt Sufficiency + +Score the opening request on a 0–2 scale for: + +- **Clarity** +- **Boundedness** +- **Testability** +- **Architectural specificity** +- **Constraint awareness** +- **Dependency awareness** + +Create: +- `prompt_sufficiency_score` +- `prompt_sufficiency_band` = High / Medium / Low + +Then note which missing prompt ingredients likely contributed to later friction. + +Do not punish short prompts by default; a narrow, obvious task can still have high sufficiency. 
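The Step 3 scoring above can be sketched as a small helper. This is a non-normative sketch: the six dimension keys mirror the list above, but the band cutoffs (9 or more is High, 5–8 Medium, otherwise Low) are illustrative assumptions, since the skill leaves the exact score-to-band mapping open.

```javascript
// Hypothetical sketch of Step 3. Dimension names mirror the skill text;
// the band cutoffs below are illustrative assumptions, not part of the skill.
const DIMENSIONS = [
  "clarity", "boundedness", "testability",
  "architectural_specificity", "constraint_awareness", "dependency_awareness",
];

function promptSufficiency(scores) {
  // Each dimension is scored 0-2, so the total ranges over 0-12.
  const total = DIMENSIONS.reduce((sum, d) => sum + (scores[d] ?? 0), 0);
  const band = total >= 9 ? "High" : total >= 5 ? "Medium" : "Low";
  return { prompt_sufficiency_score: total, prompt_sufficiency_band: band };
}
```

Note that a short, obvious task prompt can still score 2 on clarity and boundedness, which is exactly why short prompts are not punished by default.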
+ +--- + +## Step 4: Scope Change Classification + +Classify scope change into: + +- **Human-added scope** — new asks beyond the original task +- **Necessary discovered scope** — work required to complete the original task correctly +- **Agent-introduced scope** — likely unnecessary work introduced by the agent + +Record: +- `scope_change_type_primary` +- `scope_change_type_secondary` (optional) +- `scope_change_confidence` +- evidence + +Keep one short example in mind for calibration: +- Human-added: “also refactor nearby code while you’re here” +- Necessary discovered: hidden dependency must be fixed for original task to work +- Agent-introduced: extra cleanup or redesign not requested and not required + +--- + +## Step 5: Rework Shape + +Classify each session into one primary pattern: + +- **Clean execution** +- **Early replan then stable finish** +- **Progressive scope expansion** +- **Reopen/reclose churn** +- **Late-stage verification churn** +- **Abandoned mid-flight** +- **Exploratory / research session** + +Record: +- `rework_shape` +- `rework_shape_confidence` +- evidence + +--- + +## Step 6: Root Cause Analysis + +For every non-clean session, assign: + +### Primary root cause +One of: +- `SPEC_AMBIGUITY` +- `HUMAN_SCOPE_CHANGE` +- `REPO_FRAGILITY` +- `AGENT_ARCHITECTURAL_ERROR` +- `VERIFICATION_CHURN` +- `LEGITIMATE_TASK_COMPLEXITY` + +### Secondary root cause +Optional if materially relevant + +### Root-cause guidance +- **SPEC_AMBIGUITY**: opening ask lacked boundaries, targets, criteria, or constraints +- **HUMAN_SCOPE_CHANGE**: scope expanded because the user broadened the task +- **REPO_FRAGILITY**: hidden coupling, brittle files, unclear architecture, or environment issues forced extra work +- **AGENT_ARCHITECTURAL_ERROR**: wrong files, wrong assumptions, wrong approach, hallucinated structure +- **VERIFICATION_CHURN**: implementation mostly worked, but testing/validation caused loops +- **LEGITIMATE_TASK_COMPLEXITY**: revisions were expected for 
the difficulty and not clearly avoidable + +Every root-cause assignment must include: +- evidence +- why stronger alternative causes were rejected +- confidence + +--- + +## Step 6.5: Session Severity Scoring (0–100) + +Assign each session a severity score to prioritize attention. + +Components (sum, clamp 0–100): +- **Completion failure**: 0–25 (`abandoned = 25`) +- **Replanning intensity**: 0–15 +- **Scope instability**: 0–15 +- **Rework shape severity**: 0–15 +- **Prompt sufficiency deficit**: 0–10 (`low = 10`) +- **Root cause impact**: 0–10 (`REPO_FRAGILITY` / `AGENT_ARCHITECTURAL_ERROR` highest) +- **Hotspot recurrence**: 0–10 + +Bands: +- **0–19 Low** +- **20–39 Moderate** +- **40–59 Significant** +- **60–79 High** +- **80–100 Critical** + +Record: +- `session_severity_score` +- `severity_band` +- `severity_drivers` = top 2–4 contributors +- `severity_confidence` + +Use severity as a prioritization signal, not a verdict. Always explain the drivers. +Contextualize severity using session intent so research/exploration sessions are not over-penalized. + +--- + +## Step 7: Subsystem / File Clustering + +Across all conversations, cluster repeated struggle by file, folder, or subsystem. + +For each cluster, calculate: +- number of conversations touching it +- average revisions +- completion rate +- abandonment rate +- common root causes +- average severity + +Goal: identify whether friction is mostly prompt-driven, agent-driven, or concentrated in specific repo areas. 
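The Step 6.5 arithmetic (per-component caps, clamped sum, banding) can be sketched as follows. The caps and band cut points come straight from the skill text; the flat input object of raw component scores is an assumed shape.

```javascript
// Sketch of Step 6.5 severity scoring. Caps and bands are from the skill;
// the input shape (a flat object of raw component scores) is an assumption.
const CAPS = {
  completion_failure: 25,
  replanning_intensity: 15,
  scope_instability: 15,
  rework_shape_severity: 15,
  prompt_sufficiency_deficit: 10,
  root_cause_impact: 10,
  hotspot_recurrence: 10,
};

function sessionSeverity(components) {
  // Clamp each component to [0, cap], sum, then clamp the total to 0-100.
  const raw = Object.entries(CAPS).reduce(
    (sum, [name, cap]) => sum + Math.min(Math.max(components[name] ?? 0, 0), cap),
    0
  );
  const score = Math.min(Math.max(raw, 0), 100);
  const band =
    score >= 80 ? "Critical" :
    score >= 60 ? "High" :
    score >= 40 ? "Significant" :
    score >= 20 ? "Moderate" : "Low";
  return { session_severity_score: score, severity_band: band };
}
```

Since the caps already sum to 100, the final clamp is a safety net rather than a common path.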
+ +--- + +## Step 8: Comparative Cohorts + +Compare: +- first-shot successes vs re-planned sessions +- completed vs abandoned +- high prompt sufficiency vs low prompt sufficiency +- narrow-scope vs high-scope-growth +- short sessions vs long sessions +- low-friction subsystems vs high-friction subsystems + +For each comparison, identify: +- what differs materially +- which prompt traits correlate with smoother execution +- which repo traits correlate with repeated struggle + +Do not just restate averages; extract cautious evidence-backed patterns. + +--- + +## Step 9: Non-Obvious Findings + +Generate 3–7 findings that are not simple metric restatements. + +Each finding must include: +- observation +- why it matters +- evidence +- confidence + +Examples of strong findings: +- replans cluster around weak file targeting rather than weak acceptance criteria +- scope growth often begins after initial success, suggesting post-success human expansion +- auth-related struggle is driven more by repo fragility than agent hallucination + +--- + +## Step 10: Report Generation + +Create `session_analysis_report.md` with this structure: + +# 📊 Session Analysis Report — [Project Name] + +**Generated**: [timestamp] +**Conversations Analyzed**: [N] +**Date Range**: [earliest] → [latest] + +## Executive Summary + +| Metric | Value | Rating | +|:---|:---|:---| +| First-Shot Success Rate | X% | 🟢/🟡/🔴 | +| Completion Rate | X% | 🟢/🟡/🔴 | +| Avg Scope Growth | X% | 🟢/🟡/🔴 | +| Replan Rate | X% | 🟢/🟡/🔴 | +| Median Duration | Xm | — | +| Avg Session Severity | X | 🟢/🟡/🔴 | +| High-Severity Sessions | X / N | 🟢/🟡/🔴 | + +Thresholds: +- First-shot: 🟢 >70 / 🟡 40–70 / 🔴 <40 +- Scope growth: 🟢 <15 / 🟡 15–40 / 🔴 >40 +- Replan rate: 🟢 <20 / 🟡 20–50 / 🔴 >50 + +Avg severity guidance: +- 🟢 <25 +- 🟡 25–50 +- 🔴 >50 + +Note: avg severity is an aggregate health signal, not the same as per-session severity bands. 
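The threshold rows above can be expressed as one small rating helper. This is an illustrative sketch, not part of the report template itself; the cut points are the ones stated above, and boundary values fall in the 🟡 band, matching the stated ranges.

```javascript
// Illustrative helper for the rating thresholds above. Cut points are the
// ones stated in the report template; the function itself is a sketch.
function rate(value, green, red, higherIsBetter) {
  if (higherIsBetter) {
    // e.g. first-shot success: green >70, red <40, yellow in between
    return value > green ? "🟢" : value < red ? "🔴" : "🟡";
  }
  // e.g. scope growth: green <15, red >40; replan rate: green <20, red >50
  return value < green ? "🟢" : value > red ? "🔴" : "🟡";
}
```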
+ +Then add a short narrative summary of what is going well, what is breaking down, and whether the main issue is prompt quality, repo fragility, workflow discipline, or validation churn. + +## Root Cause Breakdown + +| Root Cause | Count | % | Notes | +|:---|:---|:---|:---| + +## Prompt Sufficiency Analysis +- common traits of high-sufficiency prompts +- common missing inputs in low-sufficiency prompts +- which missing prompt ingredients correlate most with replanning or abandonment + +## Scope Change Analysis +Separate: +- Human-added scope +- Necessary discovered scope +- Agent-introduced scope + +## Rework Shape Analysis +Summarize the main failure patterns across sessions. + +## Friction Hotspots +Show the files/folders/subsystems most associated with replanning, abandonment, verification churn, and high severity. + +## First-Shot Successes +List the cleanest sessions and extract what made them work. + +## Non-Obvious Findings +List 3–7 evidence-backed findings with confidence. + +## Severity Triage +List the highest-severity sessions and say whether the best intervention is: +- prompt improvement +- scope discipline +- targeted skill/workflow +- repo refactor / architecture cleanup +- validation/test harness improvement + +## Recommendations +For each recommendation, use: +- **Observed pattern** +- **Likely cause** +- **Evidence** +- **Change to make** +- **Expected benefit** +- **Confidence** + +## Per-Conversation Breakdown + +| # | Title | Intent | Duration | Scope Δ | Plan Revs | Task Revs | Root Cause | Rework Shape | Severity | Complete? 
| +|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---| + +--- + +## Step 11: Optional Post-Analysis Improvements + +If appropriate, also: +- update any local project-health or memory artifact (if present) with recurring failure modes and fragile subsystems +- generate `prompt_improvement_tips.md` from high-sufficiency / first-shot-success sessions +- suggest missing skills or workflows when the same subsystem or task sequence repeatedly causes struggle + +Only recommend workflows/skills when the pattern appears repeatedly. + +--- + +## Final Output Standard + +The workflow must produce: +1. metrics summary +2. root-cause diagnosis +3. prompt-sufficiency assessment +4. subsystem/friction map +5. severity triage and prioritization +6. evidence-backed recommendations +7. non-obvious findings + +Prefer explicit uncertainty over fake precision. diff --git a/Skills/analyze-project/examples/sample_session_analysis_report.md b/Skills/analyze-project/examples/sample_session_analysis_report.md new file mode 100644 index 0000000..141a48e --- /dev/null +++ b/Skills/analyze-project/examples/sample_session_analysis_report.md @@ -0,0 +1,78 @@ +# Sample Output: session_analysis_report.md +# Generated by /analyze-project skill on a ~3-week project with ~50 substantive sessions. +# (Trimmed for demo; real reports include full per-conversation breakdown and more cohorts.) + +# 📊 Session Analysis Report — Sample AI Video Studio + +**Generated**: 2026-03-13 +**Conversations Analyzed**: 54 substantive (with artifacts) +**Date Range**: Feb 18 – Mar 13, 2026 + +## Executive Summary + +| Metric | Value | Rating | +|-------------------------|-------------|--------| +| First-Shot Success Rate | 52% | 🟡 | +| Completion Rate | 70% | 🟢 | +| Avg Scope Growth | +58% | 🔴 | +| Replan Rate | 30% | 🟡 | +| Median Duration | ~35 min | 🟢 | +| Avg Revision Intensity | 4.8 versions| 🟡 | +| Abandoned Rate | 22% | 🟡 | + +**Narrative**: High velocity with strong completion on workflow-driven tasks.
Main friction is **post-success human scope expansion** — users add "while we're here" features after initial work succeeds, turning narrow tasks into multi-phase epics. Not primarily prompt or agent issues — more workflow discipline. + +## Root Cause Breakdown (non-clean sessions only) + +| Root Cause | % | Notes | +|-----------------------------|-----|--------------------------------------------| +| Human Scope Change | 37% | New features/epics added mid-session after success | +| Legitimate Task Complexity | 26% | Multi-phase builds with expected iteration | +| Repo Fragility | 15% | Hidden coupling, pre-existing bugs | +| Verification Churn | 11% | Late test/build failures | +| Spec Ambiguity | 7% | Vague initial ask | +| Agent Architectural Error | 4% | Rare wrong approach | + +Confidence: **High** for top two (direct evidence from version diffs). + +## Scope Change Analysis Highlights + +**Human-Added** (most common): Starts narrow → grows after Phase 1 succeeds (e.g., T2E QA → A/B testing + demos + editor tools). +**Necessary Discovered**: Hidden deps, missing packages, env issues (e.g., auth bcrypt blocking E2E). +**Agent-Introduced**: Very rare (1 case of over-creating components). + +## Rework Shape Summary + +- Clean execution: 52% +- Progressive expansion: 18% (dominant failure mode) +- Early replan → stable: 11% +- Late verification churn: 7% +- Exploratory/research: 7% +- Abandoned mid-flight: 4% + +**Pattern**: Progressive expansion often follows successful implementation — user adds adjacent work in same session. 
+ +## Friction Hotspots (top areas) + +| Subsystem | Sessions | Avg Revisions | Main Cause | +|------------------------|----------|---------------|---------------------| +| production.py + domain | 8 | 6.2 | Hidden coupling | +| fal.py (model adapter) | 7 | 5.0 | Legitimate complexity | +| billing.py + tests | 6 | 5.5 | Verification churn | +| frontend/ build | 5 | 7.0 | Missing deps/types | +| Auth/bcrypt | 3 | 4.7 | Blocks E2E testing | + +## Non-Obvious Findings (top 3) + +1. **Post-Success Expansion Dominates** — Most scope growth happens *after* initial completion succeeds, not from bad planning. (High confidence) +2. **File Targeting > Acceptance Criteria** — Missing specific files correlates more with replanning (44% vs 12%) than missing criteria. Anchors agent research early. (High) +3. **Frontend Build is Silent Killer** — Late TypeScript/import failures add 2–4 cycles repeatedly. No pre-flight check exists. (High) + +## Recommendations (top 4) + +1. **Split Sessions After Phases** — Start new conversation after successful completion to avoid context bloat and scope creep. Expected: +13% first-shot success. (High) +2. **Enforce File Targeting** — Add pre-check in prompt optimizer to flag missing file/module refs. Expected: halve replan rate. (High) +3. **Add Frontend Preflight** — Run `npm run build` early in frontend-touching sessions. Eliminates common late blockers. (High) +4. **Fix Auth Test Fixture** — Seed test users with plain passwords or bypass bcrypt for local E2E. Unblocks browser testing. (High) + +This sample shows the forensic style: evidence-backed, confidence-rated, focused on actionable patterns rather than raw counts. diff --git a/Skills/antigravity-balance/SKILL.md b/Skills/antigravity-balance/SKILL.md new file mode 100644 index 0000000..7581c43 --- /dev/null +++ b/Skills/antigravity-balance/SKILL.md @@ -0,0 +1,69 @@ +--- +name: antigravity-balance +description: Check Google Antigravity AI model quota/token balance. 
Use when a user asks about their Antigravity usage, remaining tokens, model limits, quota status, or rate limits. Works by detecting the local Antigravity language server process and querying its API. +--- + +# Antigravity Balance + +Check your Antigravity AI model quota and token balance. + +## Quick Start + +```bash +# Check quota (auto-detects local Antigravity process) +node scripts/agquota.js + +# JSON output for parsing +node scripts/agquota.js --json + +# Verbose output (debugging) +node scripts/agquota.js -v +``` + +## How It Works + +1. **Process Detection**: Finds the running `language_server_macos_arm` (or platform equivalent) process +2. **Extracts Connection Info**: Parses `--extension_server_port` and `--csrf_token` from process args +3. **Port Discovery**: Scans nearby ports to find the HTTPS API endpoint (typically extensionPort + 1) +4. **Queries Local API**: Hits `https://127.0.0.1:{port}/exa.language_server_pb.LanguageServerService/GetUserStatus` +5. **Displays Quota**: Shows remaining percentage, reset time, and model info + +## Output Format + +Default output shows: +- User name, email, and tier +- Model name and remaining quota percentage +- Visual progress bar (color-coded: green >50%, yellow >20%, red ≤20%) +- Reset countdown (e.g., "4h 32m") + +JSON output (`--json`) returns structured data: +```json +{ + "user": { "name": "...", "email": "...", "tier": "..." }, + "models": [ + { "label": "Claude Sonnet 4.5", "remainingPercent": 80, "resetTime": "..." } + ], + "timestamp": "2026-01-28T01:00:00.000Z" +} +``` + +## Requirements + +- Node.js (uses built-in `https` module) +- Antigravity (or Windsurf) must be running + +## Troubleshooting + +If the script fails: +1. Ensure Antigravity/Windsurf is running +2. Check if the language server process exists: `ps aux | grep language_server` +3. 
The process must have `--app_data_dir antigravity` in its args (distinguishes from other Codeium forks) + +## Platform-Specific Process Names + +| Platform | Process Name | +|----------|--------------| +| macOS (ARM) | `language_server_macos_arm` | +| macOS (Intel) | `language_server_macos` | +| Linux | `language_server_linux` | +| Windows | `language_server_windows_x64.exe` | diff --git a/Skills/antigravity-balance/_meta.json b/Skills/antigravity-balance/_meta.json new file mode 100644 index 0000000..d20d960 --- /dev/null +++ b/Skills/antigravity-balance/_meta.json @@ -0,0 +1,11 @@ +{ + "owner": "finderstrategy-cyber", + "slug": "antigravity-balance", + "displayName": "Antigravity Balance", + "latest": { + "version": "1.0.0", + "publishedAt": 1769563664027, + "commit": "https://github.com/clawdbot/skills/commit/bebc719d7d3d2712df5d389c05a891d02676bf6d" + }, + "history": [] +} diff --git a/Skills/antigravity-design-expert/SKILL.md b/Skills/antigravity-design-expert/SKILL.md new file mode 100644 index 0000000..4e12b7c --- /dev/null +++ b/Skills/antigravity-design-expert/SKILL.md @@ -0,0 +1,42 @@ +--- +name: antigravity-design-expert +description: Core UI/UX engineering skill for building highly interactive, spatial, weightless, and glassmorphism-based web interfaces using GSAP and 3D CSS. +risk: safe +source: community +date_added: "2026-03-07" +--- + +# Antigravity UI & Motion Design Expert + +## 🎯 Role Overview + +You are a world-class UI/UX Engineer specializing in "Antigravity Design." Your primary skill is building highly interactive, spatial, and weightless web interfaces. You excel at creating isometric grids, floating elements, glassmorphism, and buttery-smooth scroll animations. 
+ +## 🛠️ Preferred Tech Stack + +When asked to build or generate UI components, default to the following stack unless instructed otherwise: + +- **Framework:** React / Next.js +- **Styling:** Tailwind CSS (for layout and utility) + Custom CSS for complex 3D transforms +- **Animation:** GSAP (GreenSock) + ScrollTrigger for scroll-linked motion +- **3D Elements:** React Three Fiber (R3F) or CSS 3D Transforms (`rotateX`, `rotateY`, `perspective`) + +## 📐 Design Principles (The "Antigravity" Vibe) + +- **Weightlessness:** UI cards and elements should appear to float. Use layered, soft, diffused drop-shadows (e.g., `box-shadow: 0 20px 40px rgba(0,0,0,0.05)`). +- **Spatial Depth:** Utilize Z-axis layering. Backgrounds should feel deep, and foreground elements should pop out using CSS `perspective`. +- **Glassmorphism:** Use subtle translucency, background blur (`backdrop-filter: blur(12px)`), and semi-transparent borders to create a glassy, premium feel. +- **Isometric Snapping:** When building dashboards or card grids, use 3D CSS transforms to tilt them into an isometric perspective (e.g., `transform: rotateX(60deg) rotateZ(-45deg)`). + +## 🎬 Motion & Animation Rules + +- **Never snap instantly:** All state changes (hover, focus, active) must have smooth transitions (minimum `0.3s ease-out`). +- **Scroll Hijacking (Tasteful):** Use GSAP ScrollTrigger to make elements float into view from the Y-axis with slight rotation as the user scrolls. +- **Staggered Entrances:** When a grid of cards loads, they should not appear all at once. Stagger their entrance animations by `0.1s` so they drop in like dominoes. +- **Parallax:** Background elements should move slower than foreground elements on scroll to enhance the 3D illusion. + +## 🚧 Execution Constraints + +- Always write modular, reusable components. +- Ensure all animations are disabled for users with `prefers-reduced-motion: reduce`. 
+- Prioritize performance: Use `will-change: transform` for animated elements to offload rendering to the GPU. Do not animate expensive properties like `box-shadow` or `filter` continuously. diff --git a/Skills/antigravity-skill-orchestrator/README.md b/Skills/antigravity-skill-orchestrator/README.md new file mode 100644 index 0000000..c1fb179 --- /dev/null +++ b/Skills/antigravity-skill-orchestrator/README.md @@ -0,0 +1,32 @@ +# antigravity-skill-orchestrator + +A meta-skill package for the Antigravity IDE ecosystem. + +## Overview + +The `antigravity-skill-orchestrator` is an intelligent meta-skill that enhances an AI agent's ability to handle complex, multi-domain tasks. It provides strict guidelines and workflows enabling the agent to: + +1. **Evaluate Task Complexity**: Implementing guardrails to prevent the overuse of specialized skills on simple, straightforward tasks. +2. **Dynamically Select Skills**: Identifying the best combination of skills for a given complex problem. +3. **Track Skill Combinations**: Utilizing the `agent-memory-mcp` skill to store, search, and retrieve successful skill combinations for future reference, building institutional knowledge over time. + +## Installation + +This skill is designed to be used within the Antigravity IDE and integrated alongside the existing suite of AWESOME skills. + +Make sure you have the `agent-memory-mcp` skill installed and running to take full advantage of the combination tracking feature. + +## Usage + +When executing a prompt with an AI assistant via the Antigravity IDE, you can invoke this skill: + +```bash +@antigravity-skill-orchestrator Please build a comprehensive dashboard integrating live data fetching, an interactive UI, and performance optimizations.
+``` + +The agent will then follow the directives in the `SKILL.md` to break down the task, search memory for similar challenges, assemble the right team of skills (e.g., `@react-patterns` + `@nodejs-backend-patterns`), and execute the task without over-complicating it. + +--- + +**Author:** [Wahid](https://github.com/wahidzzz) +**Source:** [antigravity-skill-orchestrator](https://github.com/wahidzzz/antigravity-skill-orchestrator) diff --git a/Skills/antigravity-skill-orchestrator/SKILL.md b/Skills/antigravity-skill-orchestrator/SKILL.md new file mode 100644 index 0000000..7bc1ebd --- /dev/null +++ b/Skills/antigravity-skill-orchestrator/SKILL.md @@ -0,0 +1,123 @@ +--- +name: antigravity-skill-orchestrator +description: "A meta-skill that understands task requirements, dynamically selects appropriate skills, tracks successful skill combinations using agent-memory-mcp, and prevents skill overuse for simple tasks." +category: meta +risk: safe +source: community +tags: "[orchestration, meta-skill, agent-memory, task-evaluation]" +date_added: "2026-03-13" +--- + +# antigravity-skill-orchestrator + +## Overview + +The `skill-orchestrator` is a meta-skill designed to enhance the AI agent's ability to tackle complex problems. It acts as an intelligent coordinator that first evaluates the complexity of a user's request. Based on that evaluation, it determines if specialized skills are needed. If they are, it selects the right combination of skills, explicitly tracks these combinations using `@agent-memory-mcp` for future reference, and guides the agent through the execution process. Crucially, it includes strict guardrails to prevent the unnecessary use of specialized skills for simple tasks that can be solved with baseline capabilities. + +## When to Use This Skill + +- Use when tackling a complex, multi-step problem that likely requires multiple domains of expertise. 
+- Use when you are unsure which specific skills are best suited for a given user request, and need to discover them from the broader ecosystem. +- Use when the user explicitly asks to "orchestrate", "combine skills", or "use the best tools for the job" on a significant task. +- Use when you want to look up previously successful combinations of skills for a specific type of problem. + +## Core Concepts + +### Task Evaluation Guardrails +Not every task requires a specialized skill. For straightforward issues (e.g., small CSS fixes, simple script writing, renaming a variable), **DO NOT USE** specialized skills. Over-engineering simple tasks wastes tokens and time. + +Additionally, the orchestrator is strictly forbidden from creating new skills. Its sole purpose is to combine and use existing skills provided by the community or present in the current environment. + +Before invoking any skills, evaluate the task: +1. **Is the task simple/contained?** Solve it directly using the agent's ordinary file editing, search, and terminal capabilities available in the current environment. +2. **Is the task complex/multi-domain?** Only then should you proceed to orchestrate skills. + +### Skill Selection & Combinations +When a task is deemed complex, identify the necessary domains (e.g., frontend, database, deployment). Search available skills in the current environment to find the most relevant ones. If the required skills are not found locally, consult the master skill catalog. + +### Master Skill Catalog +The Antigravity ecosystem maintains a master catalog of highly curated skills at `https://raw.githubusercontent.com/sickn33/antigravity-awesome-skills/main/CATALOG.md`. 
When local skills are insufficient, fetch this catalog to discover appropriate skills across the 9 primary categories: +- `architecture` +- `business` +- `data-ai` +- `development` +- `general` +- `infrastructure` +- `security` +- `testing` +- `workflow` + +### Memory Integration (`@agent-memory-mcp`) +To build institutional knowledge, the orchestrator relies on the `agent-memory-mcp` skill to record and retrieve successful skill combinations. + +## Step-by-Step Guide + +### 1. Task Evaluation & Guardrail Check +[Triggered when facing a new user request that might need skills] +1. Read the user's request. +2. Ask yourself: "Can I solve this efficiently with just basic file editing and terminal commands?" +3. If YES: Proceed without invoking specialized skills. Stop the orchestration here. +4. If NO: Proceed to step 2. + +### 2. Retrieve Past Knowledge +[Triggered if the task is complex] +1. Use the `memory_search` tool provided by `agent-memory-mcp` to search for similar past tasks. + - Example query: `memory_search({ query: "skill combination for react native and firebase", type: "skill_combination" })` +2. If a working combination exists, read the details using `memory_read`. +3. If no relevant memory exists, proceed to Step 3. + +### 3. Discover and Select Skills +[Triggered if no past knowledge covers this task] +1. Analyze the core requirements (e.g., "needs a React UI, a Node.js backend, and a PostgreSQL database"). +2. Query the locally available skills using the current environment's skill list or equivalent discovery mechanism to find the best match for each requirement. +3. **If local skills are insufficient**, fetch the master catalog with the web or command-line retrieval tools available in the current environment: `https://raw.githubusercontent.com/sickn33/antigravity-awesome-skills/main/CATALOG.md`. +4. Scan the catalog's 9 main categories to identify the appropriate skills to bring into the current context. +5. 
Select the minimal set of skills needed. **Do not over-select.** + +### 4. Apply Skills and Track the Combination +[Triggered after executing the task using the selected skills] +1. Assume the task was completed successfully using a new combination of skills (e.g., `@react-patterns` + `@nodejs-backend-patterns` + `@postgresql`). +2. Record this combination for future use using `memory_write` from `agent-memory-mcp`. + - Ensure the type is `skill_combination`. + - Provide a descriptive key and content detailing why these skills worked well together. + +## Examples + +### Example 1: Handling a Simple Task (The Guardrail in Action) +**User Request:** "Change the color of the submit button in `index.css` to blue." +**Action:** The skill orchestrator evaluates the task. It determines this is a "simple/contained" task. It **does not** invoke specialized skills. It directly edits `index.css`. + +### Example 2: Recording a New Skill Combination +```javascript +// Using the agent-memory-mcp tool after successfully building a complex feature +memory_write({ + key: "combination-ecommerce-checkout", + type: "skill_combination", + content: "For e-commerce checkouts, using @stripe-integration combined with @react-state-management and @postgresql effectively handles the full flow from UI state to payment processing to order recording.", + tags: ["ecommerce", "checkout", "stripe", "react"] +}) +``` + +### Example 3: Retrieving a Combination +```javascript +// At the start of a new e-commerce task +memory_search({ + query: "ecommerce checkout", + type: "skill_combination" +}) +// Returns the key "combination-ecommerce-checkout", which you then read: +memory_read({ key: "combination-ecommerce-checkout" }) +``` + +## Best Practices + +- ✅ **Do:** Always evaluate task complexity *before* looking for skills. +- ✅ **Do:** Keep the number of orchestrated skills as small as possible. +- ✅ **Do:** Use highly descriptive keys when running `memory_write` so they are easy to search later. 
+- ❌ **Don't:** Use this skill for simple bug fixes or UI tweaks. +- ❌ **Don't:** Combine skills that have overlapping and conflicting instructions without a clear plan to resolve the conflict. +- ❌ **Don't:** Attempt to construct, generate, or create new skills. Only combine what is available. + +## Related Skills + +- `@agent-memory-mcp` - Essential for this skill to function. Provides the persistent storage for skill combinations. diff --git a/Skills/new-skills/analyze-project/SKILL.md b/Skills/new-skills/analyze-project/SKILL.md new file mode 100644 index 0000000..20e8dae --- /dev/null +++ b/Skills/new-skills/analyze-project/SKILL.md @@ -0,0 +1,432 @@ +--- +name: analyze-project +description: Forensic root cause analyzer for Antigravity sessions. Classifies scope deltas, rework patterns, root causes, hotspots, and auto-improves prompts/health. +version: "1.0" +tags: [analysis, diagnostics, meta, root-cause, project-health, session-review] +--- + +# /analyze-project — Root Cause Analyst Workflow + +Analyze AI-assisted coding sessions in `~/.gemini/antigravity/brain/` and produce a report that explains not just **what happened**, but **why it happened**, **who/what caused it**, and **what should change next time**. + +## Goal + +For each session, determine: + +1. What changed from the initial ask to the final executed work +2. Whether the main cause was: + - user/spec + - agent + - repo/codebase + - validation/testing + - legitimate task complexity +3. Whether the opening prompt was sufficient +4. Which files/subsystems repeatedly correlate with struggle +5. 
What changes would most improve future sessions + +## Global Rules + +- Treat `.resolved.N` counts as **iteration signals**, not proof of failure +- Separate **human-added scope**, **necessary discovered scope**, and **agent-introduced scope** +- Separate **agent error** from **repo friction** +- Every diagnosis must include **evidence** and **confidence** +- Confidence levels: + - **High** = direct artifact/timestamp evidence + - **Medium** = multiple supporting signals + - **Low** = plausible inference, not directly proven +- Evidence precedence: + - artifact contents > timestamps > metadata summaries > inference +- If evidence is weak, say so + +--- + +## Step 0.5: Session Intent Classification + +Classify the primary session intent from objective + artifacts: + +- `DELIVERY` +- `DEBUGGING` +- `REFACTOR` +- `RESEARCH` +- `EXPLORATION` +- `AUDIT_ANALYSIS` + +Record: +- `session_intent` +- `session_intent_confidence` + +Use intent to contextualize severity and rework shape. +Do not judge exploratory or research sessions by the same standards as narrow delivery sessions. + +--- + +## Step 1: Discover Conversations + +1. Read available conversation summaries from system context +2. List conversation folders in the user’s Antigravity `brain/` directory +3. Build a conversation index with: + - `conversation_id` + - `title` + - `objective` + - `created` + - `last_modified` +4. If the user supplied a keyword/path, filter to matching conversations; otherwise analyze all + +Output: indexed list of conversations to analyze. + +--- + +## Step 2: Extract Session Evidence + +For each conversation, read if present: + +### Core artifacts +- `task.md` +- `implementation_plan.md` +- `walkthrough.md` + +### Metadata +- `*.metadata.json` + +### Version snapshots +- `task.md.resolved.0 ... N` +- `implementation_plan.md.resolved.0 ... N` +- `walkthrough.md.resolved.0 ... 
N` + +### Additional signals +- other `.md` artifacts +- timestamps across artifact updates +- file/folder/subsystem names mentioned in plans/walkthroughs +- validation/testing language +- explicit acceptance criteria, constraints, non-goals, and file targets + +Record per conversation: + +#### Lifecycle +- `has_task` +- `has_plan` +- `has_walkthrough` +- `is_completed` +- `is_abandoned_candidate` = task exists but no walkthrough + +#### Revision / change volume +- `task_versions` +- `plan_versions` +- `walkthrough_versions` +- `extra_artifacts` + +#### Scope +- `task_items_initial` +- `task_items_final` +- `task_completed_pct` +- `scope_delta_raw` +- `scope_creep_pct_raw` + +#### Timing +- `created_at` +- `completed_at` +- `duration_minutes` + +#### Content / quality +- `objective_text` +- `initial_plan_summary` +- `final_plan_summary` +- `initial_task_excerpt` +- `final_task_excerpt` +- `walkthrough_summary` +- `mentioned_files_or_subsystems` +- `validation_requirements_present` +- `acceptance_criteria_present` +- `non_goals_present` +- `scope_boundaries_present` +- `file_targets_present` +- `constraints_present` + +--- + +## Step 3: Prompt Sufficiency + +Score the opening request on a 0–2 scale for: + +- **Clarity** +- **Boundedness** +- **Testability** +- **Architectural specificity** +- **Constraint awareness** +- **Dependency awareness** + +Create: +- `prompt_sufficiency_score` +- `prompt_sufficiency_band` = High / Medium / Low + +Then note which missing prompt ingredients likely contributed to later friction. + +Do not punish short prompts by default; a narrow, obvious task can still have high sufficiency. 
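The scoring above can be operationalized as a small helper. This is an illustration only: the 0–2 dimension inputs still come from the analyst's judgment, and the High/Medium/Low cutoffs (total ≥9 and ≥5) are assumptions, not part of this workflow's spec.

```javascript
// Hypothetical sketch of Step 3: six dimensions, each scored 0-2, summed to 0-12.
// The band cutoffs (>= 9 High, >= 5 Medium) are assumed for illustration.
function promptSufficiency(scores) {
  const dims = [
    'clarity', 'boundedness', 'testability',
    'architecturalSpecificity', 'constraintAwareness', 'dependencyAwareness',
  ];
  // Missing dimensions default to 0 rather than throwing.
  const total = dims.reduce((sum, d) => sum + (scores[d] ?? 0), 0);
  const band = total >= 9 ? 'High' : total >= 5 ? 'Medium' : 'Low';
  return { prompt_sufficiency_score: total, prompt_sufficiency_band: band };
}
```

Note how this matches the rule above about short prompts: a terse but obvious task can still score 2 on boundedness and testability, which is how a short prompt earns a high band.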
+ +--- + +## Step 4: Scope Change Classification + +Classify scope change into: + +- **Human-added scope** — new asks beyond the original task +- **Necessary discovered scope** — work required to complete the original task correctly +- **Agent-introduced scope** — likely unnecessary work introduced by the agent + +Record: +- `scope_change_type_primary` +- `scope_change_type_secondary` (optional) +- `scope_change_confidence` +- evidence + +Keep one short example in mind for calibration: +- Human-added: “also refactor nearby code while you’re here” +- Necessary discovered: hidden dependency must be fixed for original task to work +- Agent-introduced: extra cleanup or redesign not requested and not required + +--- + +## Step 5: Rework Shape + +Classify each session into one primary pattern: + +- **Clean execution** +- **Early replan then stable finish** +- **Progressive scope expansion** +- **Reopen/reclose churn** +- **Late-stage verification churn** +- **Abandoned mid-flight** +- **Exploratory / research session** + +Record: +- `rework_shape` +- `rework_shape_confidence` +- evidence + +--- + +## Step 6: Root Cause Analysis + +For every non-clean session, assign: + +### Primary root cause +One of: +- `SPEC_AMBIGUITY` +- `HUMAN_SCOPE_CHANGE` +- `REPO_FRAGILITY` +- `AGENT_ARCHITECTURAL_ERROR` +- `VERIFICATION_CHURN` +- `LEGITIMATE_TASK_COMPLEXITY` + +### Secondary root cause +Optional if materially relevant + +### Root-cause guidance +- **SPEC_AMBIGUITY**: opening ask lacked boundaries, targets, criteria, or constraints +- **HUMAN_SCOPE_CHANGE**: scope expanded because the user broadened the task +- **REPO_FRAGILITY**: hidden coupling, brittle files, unclear architecture, or environment issues forced extra work +- **AGENT_ARCHITECTURAL_ERROR**: wrong files, wrong assumptions, wrong approach, hallucinated structure +- **VERIFICATION_CHURN**: implementation mostly worked, but testing/validation caused loops +- **LEGITIMATE_TASK_COMPLEXITY**: revisions were expected for 
the difficulty and not clearly avoidable + +Every root-cause assignment must include: +- evidence +- why stronger alternative causes were rejected +- confidence + +--- + +## Step 6.5: Session Severity Scoring (0–100) + +Assign each session a severity score to prioritize attention. + +Components (sum, clamp 0–100): +- **Completion failure**: 0–25 (`abandoned = 25`) +- **Replanning intensity**: 0–15 +- **Scope instability**: 0–15 +- **Rework shape severity**: 0–15 +- **Prompt sufficiency deficit**: 0–10 (`low = 10`) +- **Root cause impact**: 0–10 (`REPO_FRAGILITY` / `AGENT_ARCHITECTURAL_ERROR` highest) +- **Hotspot recurrence**: 0–10 + +Bands: +- **0–19 Low** +- **20–39 Moderate** +- **40–59 Significant** +- **60–79 High** +- **80–100 Critical** + +Record: +- `session_severity_score` +- `severity_band` +- `severity_drivers` = top 2–4 contributors +- `severity_confidence` + +Use severity as a prioritization signal, not a verdict. Always explain the drivers. +Contextualize severity using session intent so research/exploration sessions are not over-penalized. + +--- + +## Step 7: Subsystem / File Clustering + +Across all conversations, cluster repeated struggle by file, folder, or subsystem. + +For each cluster, calculate: +- number of conversations touching it +- average revisions +- completion rate +- abandonment rate +- common root causes +- average severity + +Goal: identify whether friction is mostly prompt-driven, agent-driven, or concentrated in specific repo areas. 
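The Step 6.5 severity computation can be sketched as a clamped sum over the seven components. This is a hedged illustration; assigning each component's value within its documented range remains the analyst's job.

```javascript
// Sketch of Step 6.5: sum the seven severity components, clamp to 0-100, then band.
function sessionSeverity(c) {
  const raw =
    (c.completionFailure ?? 0) +    // 0-25 (abandoned = 25)
    (c.replanIntensity ?? 0) +      // 0-15
    (c.scopeInstability ?? 0) +     // 0-15
    (c.reworkShapeSeverity ?? 0) +  // 0-15
    (c.promptDeficit ?? 0) +        // 0-10 (low sufficiency = 10)
    (c.rootCauseImpact ?? 0) +      // 0-10
    (c.hotspotRecurrence ?? 0);     // 0-10
  const score = Math.min(100, Math.max(0, raw));
  const band =
    score >= 80 ? 'Critical' :
    score >= 60 ? 'High' :
    score >= 40 ? 'Significant' :
    score >= 20 ? 'Moderate' : 'Low';
  return { session_severity_score: score, severity_band: band };
}
```

The `severity_drivers` and `severity_confidence` fields, plus the intent-based contextualization, are deliberately left to the analyst; only the arithmetic is mechanical.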
+ +--- + +## Step 8: Comparative Cohorts + +Compare: +- first-shot successes vs re-planned sessions +- completed vs abandoned +- high prompt sufficiency vs low prompt sufficiency +- narrow-scope vs high-scope-growth +- short sessions vs long sessions +- low-friction subsystems vs high-friction subsystems + +For each comparison, identify: +- what differs materially +- which prompt traits correlate with smoother execution +- which repo traits correlate with repeated struggle + +Do not just restate averages; extract cautious evidence-backed patterns. + +--- + +## Step 9: Non-Obvious Findings + +Generate 3–7 findings that are not simple metric restatements. + +Each finding must include: +- observation +- why it matters +- evidence +- confidence + +Examples of strong findings: +- replans cluster around weak file targeting rather than weak acceptance criteria +- scope growth often begins after initial success, suggesting post-success human expansion +- auth-related struggle is driven more by repo fragility than agent hallucination + +--- + +## Step 10: Report Generation + +Create `session_analysis_report.md` with this structure: + +# 📊 Session Analysis Report — [Project Name] + +**Generated**: [timestamp] +**Conversations Analyzed**: [N] +**Date Range**: [earliest] → [latest] + +## Executive Summary + +| Metric | Value | Rating | +|:---|:---|:---| +| First-Shot Success Rate | X% | 🟢/🟡/🔴 | +| Completion Rate | X% | 🟢/🟡/🔴 | +| Avg Scope Growth | X% | 🟢/🟡/🔴 | +| Replan Rate | X% | 🟢/🟡/🔴 | +| Median Duration | Xm | — | +| Avg Session Severity | X | 🟢/🟡/🔴 | +| High-Severity Sessions | X / N | 🟢/🟡/🔴 | + +Thresholds: +- First-shot: 🟢 >70 / 🟡 40–70 / 🔴 <40 +- Scope growth: 🟢 <15 / 🟡 15–40 / 🔴 >40 +- Replan rate: 🟢 <20 / 🟡 20–50 / 🔴 >50 + +Avg severity guidance: +- 🟢 <25 +- 🟡 25–50 +- 🔴 >50 + +Note: avg severity is an aggregate health signal, not the same as per-session severity bands. 
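The rating thresholds above can be encoded mechanically. A sketch; only the three metrics with published cutoffs are covered, and assigning exact boundary values to the middle band (mirroring ranges like "40–70") is an assumption.

```javascript
// Sketch: map the documented Executive Summary cutoffs to traffic-light ratings.
// Boundary values land in the middle band, mirroring ranges like "40-70".
const THRESHOLDS = {
  firstShot:   (v) => (v > 70 ? '🟢' : v >= 40 ? '🟡' : '🔴'),
  scopeGrowth: (v) => (v < 15 ? '🟢' : v <= 40 ? '🟡' : '🔴'),
  replanRate:  (v) => (v < 20 ? '🟢' : v <= 50 ? '🟡' : '🔴'),
};

function rateMetric(metric, value) {
  const rate = THRESHOLDS[metric];
  return rate ? rate(value) : '—'; // unrated metrics (e.g., Median Duration)
}
```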
+ +Then add a short narrative summary of what is going well, what is breaking down, and whether the main issue is prompt quality, repo fragility, workflow discipline, or validation churn. + +## Root Cause Breakdown + +| Root Cause | Count | % | Notes | +|:---|:---|:---|:---| + +## Prompt Sufficiency Analysis +- common traits of high-sufficiency prompts +- common missing inputs in low-sufficiency prompts +- which missing prompt ingredients correlate most with replanning or abandonment + +## Scope Change Analysis +Separate: +- Human-added scope +- Necessary discovered scope +- Agent-introduced scope + +## Rework Shape Analysis +Summarize the main failure patterns across sessions. + +## Friction Hotspots +Show the files/folders/subsystems most associated with replanning, abandonment, verification churn, and high severity. + +## First-Shot Successes +List the cleanest sessions and extract what made them work. + +## Non-Obvious Findings +List 3–7 evidence-backed findings with confidence. + +## Severity Triage +List the highest-severity sessions and say whether the best intervention is: +- prompt improvement +- scope discipline +- targeted skill/workflow +- repo refactor / architecture cleanup +- validation/test harness improvement + +## Recommendations +For each recommendation, use: +- **Observed pattern** +- **Likely cause** +- **Evidence** +- **Change to make** +- **Expected benefit** +- **Confidence** + +## Per-Conversation Breakdown + +| # | Title | Intent | Duration | Scope Δ | Plan Revs | Task Revs | Root Cause | Rework Shape | Severity | Complete? 
| +|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---| + +--- + +## Step 11: Optional Post-Analysis Improvements + +If appropriate, also: +- update any local project-health or memory artifact (if present) with recurring failure modes and fragile subsystems +- generate `prompt_improvement_tips.md` from high-sufficiency / first-shot-success sessions +- suggest missing skills or workflows when the same subsystem or task sequence repeatedly causes struggle + +Only recommend workflows/skills when the pattern appears repeatedly. + +--- + +## Final Output Standard + +The workflow must produce: +1. metrics summary +2. root-cause diagnosis +3. prompt-sufficiency assessment +4. subsystem/friction map +5. severity triage and prioritization +6. evidence-backed recommendations +7. non-obvious findings + +Prefer explicit uncertainty over fake precision. diff --git a/Skills/new-skills/analyze-project/examples/sample_session_analysis_report.md b/Skills/new-skills/analyze-project/examples/sample_session_analysis_report.md new file mode 100644 index 0000000..141a48e --- /dev/null +++ b/Skills/new-skills/analyze-project/examples/sample_session_analysis_report.md @@ -0,0 +1,78 @@ +# Sample Output: session_analysis_report.md +# Generated by /analyze-project skill on a ~3-week project with ~50 substantive sessions. +# (Trimmed for demo; real reports include full per-conversation breakdown and more cohorts.) 
+ +# 📊 Session Analysis Report — Sample AI Video Studio + +**Generated**: 2026-03-13 +**Conversations Analyzed**: 54 substantive (with artifacts) +**Date Range**: Feb 18 – Mar 13, 2026 + +## Executive Summary + +| Metric | Value | Rating | +|-------------------------|-------------|--------| +| First-Shot Success Rate | 52% | 🟡 | +| Completion Rate | 70% | 🟢 | +| Avg Scope Growth | +58% | 🔴 | +| Replan Rate | 30% | 🟡 | +| Median Duration | ~35 min | 🟢 | +| Avg Revision Intensity | 4.8 versions| 🟡 | +| Abandoned Rate | 22% | 🟡 | + +**Narrative**: High velocity with strong completion on workflow-driven tasks. Main friction is **post-success human scope expansion** — users add "while we're here" features after initial work succeeds, turning narrow tasks into multi-phase epics. Not primarily prompt or agent issues — more workflow discipline. + +## Root Cause Breakdown (non-clean sessions only) + +| Root Cause | % | Notes | +|-----------------------------|-----|--------------------------------------------| +| Human Scope Change | 37% | New features/epics added mid-session after success | +| Legitimate Task Complexity | 26% | Multi-phase builds with expected iteration | +| Repo Fragility | 15% | Hidden coupling, pre-existing bugs | +| Verification Churn | 11% | Late test/build failures | +| Spec Ambiguity | 7% | Vague initial ask | +| Agent Architectural Error | 4% | Rare wrong approach | + +Confidence: **High** for top two (direct evidence from version diffs). + +## Scope Change Analysis Highlights + +**Human-Added** (most common): Starts narrow → grows after Phase 1 succeeds (e.g., T2E QA → A/B testing + demos + editor tools). +**Necessary Discovered**: Hidden deps, missing packages, env issues (e.g., auth bcrypt blocking E2E). +**Agent-Introduced**: Very rare (1 case of over-creating components). 
+ +## Rework Shape Summary + +- Clean execution: 52% +- Progressive expansion: 18% (dominant failure mode) +- Early replan → stable: 11% +- Late verification churn: 7% +- Exploratory/research: 7% +- Abandoned mid-flight: 4% + +**Pattern**: Progressive expansion often follows successful implementation — user adds adjacent work in same session. + +## Friction Hotspots (top areas) + +| Subsystem | Sessions | Avg Revisions | Main Cause | +|------------------------|----------|---------------|---------------------| +| production.py + domain | 8 | 6.2 | Hidden coupling | +| fal.py (model adapter) | 7 | 5.0 | Legitimate complexity | +| billing.py + tests | 6 | 5.5 | Verification churn | +| frontend/ build | 5 | 7.0 | Missing deps/types | +| Auth/bcrypt | 3 | 4.7 | Blocks E2E testing | + +## Non-Obvious Findings (top 3) + +1. **Post-Success Expansion Dominates** — Most scope growth happens *after* initial completion succeeds, not from bad planning. (High confidence) +2. **File Targeting > Acceptance Criteria** — Missing specific files correlates more with replanning (44% vs 12%) than missing criteria. Anchors agent research early. (High) +3. **Frontend Build is Silent Killer** — Late TypeScript/import failures add 2–4 cycles repeatedly. No pre-flight check exists. (High) + +## Recommendations (top 4) + +1. **Split Sessions After Phases** — Start new conversation after successful completion to avoid context bloat and scope creep. Expected: +13% first-shot success. (High) +2. **Enforce File Targeting** — Add pre-check in prompt optimizer to flag missing file/module refs. Expected: halve replan rate. (High) +3. **Add Frontend Preflight** — Run `npm run build` early in frontend-touching sessions. Eliminates common late blockers. (High) +4. **Fix Auth Test Fixture** — Seed test users with plain passwords or bypass bcrypt for local E2E. Unblocks browser testing. 
(High) + +This sample shows the forensic style: evidence-backed, confidence-rated, focused on actionable patterns rather than raw counts. diff --git a/Skills/new-skills/antigravity-balance/SKILL.md b/Skills/new-skills/antigravity-balance/SKILL.md new file mode 100644 index 0000000..7581c43 --- /dev/null +++ b/Skills/new-skills/antigravity-balance/SKILL.md @@ -0,0 +1,69 @@ +--- +name: antigravity-balance +description: Check Google Antigravity AI model quota/token balance. Use when a user asks about their Antigravity usage, remaining tokens, model limits, quota status, or rate limits. Works by detecting the local Antigravity language server process and querying its API. +--- + +# Antigravity Balance + +Check your Antigravity AI model quota and token balance. + +## Quick Start + +```bash +# Check quota (auto-detects local Antigravity process) +node scripts/agquota.js + +# JSON output for parsing +node scripts/agquota.js --json + +# Verbose output (debugging) +node scripts/agquota.js -v +``` + +## How It Works + +1. **Process Detection**: Finds the running `language_server_macos_arm` (or platform equivalent) process +2. **Extracts Connection Info**: Parses `--extension_server_port` and `--csrf_token` from process args +3. **Port Discovery**: Scans nearby ports to find the HTTPS API endpoint (typically extensionPort + 1) +4. **Queries Local API**: Hits `https://127.0.0.1:{port}/exa.language_server_pb.LanguageServerService/GetUserStatus` +5. **Displays Quota**: Shows remaining percentage, reset time, and model info + +## Output Format + +Default output shows: +- User name, email, and tier +- Model name and remaining quota percentage +- Visual progress bar (color-coded: green >50%, yellow >20%, red ≤20%) +- Reset countdown (e.g., "4h 32m") + +JSON output (`--json`) returns structured data: +```json +{ + "user": { "name": "...", "email": "...", "tier": "..." }, + "models": [ + { "label": "Claude Sonnet 4.5", "remainingPercent": 80, "resetTime": "..." 
} + ], + "timestamp": "2026-01-28T01:00:00.000Z" +} +``` + +## Requirements + +- Node.js (uses built-in `https` module) +- Antigravity (or Windsurf) must be running + +## Troubleshooting + +If the script fails: +1. Ensure Antigravity/Windsurf is running +2. Check if the language server process exists: `ps aux | grep language_server` +3. The process must have `--app_data_dir antigravity` in its args (distinguishes from other Codeium forks) + +## Platform-Specific Process Names + +| Platform | Process Name | +|----------|--------------| +| macOS (ARM) | `language_server_macos_arm` | +| macOS (Intel) | `language_server_macos` | +| Linux | `language_server_linux` | +| Windows | `language_server_windows_x64.exe` | diff --git a/Skills/new-skills/antigravity-balance/_meta.json b/Skills/new-skills/antigravity-balance/_meta.json new file mode 100644 index 0000000..d20d960 --- /dev/null +++ b/Skills/new-skills/antigravity-balance/_meta.json @@ -0,0 +1,11 @@ +{ + "owner": "finderstrategy-cyber", + "slug": "antigravity-balance", + "displayName": "Antigravity Balance", + "latest": { + "version": "1.0.0", + "publishedAt": 1769563664027, + "commit": "https://github.com/clawdbot/skills/commit/bebc719d7d3d2712df5d389c05a891d02676bf6d" + }, + "history": [] +} diff --git a/Skills/new-skills/antigravity-design-expert/SKILL.md b/Skills/new-skills/antigravity-design-expert/SKILL.md new file mode 100644 index 0000000..4e12b7c --- /dev/null +++ b/Skills/new-skills/antigravity-design-expert/SKILL.md @@ -0,0 +1,42 @@ +--- +name: antigravity-design-expert +description: Core UI/UX engineering skill for building highly interactive, spatial, weightless, and glassmorphism-based web interfaces using GSAP and 3D CSS. +risk: safe +source: community +date_added: "2026-03-07" +--- + +# Antigravity UI & Motion Design Expert + +## 🎯 Role Overview + +You are a world-class UI/UX Engineer specializing in "Antigravity Design." 
Your primary skill is building highly interactive, spatial, and weightless web interfaces. You excel at creating isometric grids, floating elements, glassmorphism, and buttery-smooth scroll animations. + +## 🛠️ Preferred Tech Stack + +When asked to build or generate UI components, default to the following stack unless instructed otherwise: + +- **Framework:** React / Next.js +- **Styling:** Tailwind CSS (for layout and utility) + Custom CSS for complex 3D transforms +- **Animation:** GSAP (GreenSock) + ScrollTrigger for scroll-linked motion +- **3D Elements:** React Three Fiber (R3F) or CSS 3D Transforms (`rotateX`, `rotateY`, `perspective`) + +## 📐 Design Principles (The "Antigravity" Vibe) + +- **Weightlessness:** UI cards and elements should appear to float. Use layered, soft, diffused drop-shadows (e.g., `box-shadow: 0 20px 40px rgba(0,0,0,0.05)`). +- **Spatial Depth:** Utilize Z-axis layering. Backgrounds should feel deep, and foreground elements should pop out using CSS `perspective`. +- **Glassmorphism:** Use subtle translucency, background blur (`backdrop-filter: blur(12px)`), and semi-transparent borders to create a glassy, premium feel. +- **Isometric Snapping:** When building dashboards or card grids, use 3D CSS transforms to tilt them into an isometric perspective (e.g., `transform: rotateX(60deg) rotateZ(-45deg)`). + +## 🎬 Motion & Animation Rules + +- **Never snap instantly:** All state changes (hover, focus, active) must have smooth transitions (minimum `0.3s ease-out`). +- **Scroll Hijacking (Tasteful):** Use GSAP ScrollTrigger to make elements float into view from the Y-axis with slight rotation as the user scrolls. +- **Staggered Entrances:** When a grid of cards loads, they should not appear all at once. Stagger their entrance animations by `0.1s` so they drop in like dominoes. +- **Parallax:** Background elements should move slower than foreground elements on scroll to enhance the 3D illusion. 
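The principles above can be condensed into a small stylesheet sketch. All values are illustrative defaults drawn from the examples in this document, and the class name `.ag-card` is hypothetical:

```css
/* Hypothetical "weightless card" sketch; values mirror the examples above. */
.ag-card {
  background: rgba(255, 255, 255, 0.55);       /* translucency for the glassy feel */
  backdrop-filter: blur(12px);                 /* glassmorphism background blur */
  border: 1px solid rgba(255, 255, 255, 0.3);  /* semi-transparent border */
  box-shadow: 0 20px 40px rgba(0, 0, 0, 0.05); /* soft, diffused floating shadow */
  transition: transform 0.3s ease-out;         /* never snap instantly */
  will-change: transform;                      /* hint GPU compositing */
}

.ag-card:hover {
  transform: translateY(-8px); /* lift on hover to reinforce weightlessness */
}

@media (prefers-reduced-motion: reduce) {
  .ag-card {
    transition: none; /* disable motion for reduced-motion users */
  }
}
```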
+ +## 🚧 Execution Constraints + +- Always write modular, reusable components. +- Ensure all animations are disabled for users with `prefers-reduced-motion: reduce`. +- Prioritize performance: Use `will-change: transform` for animated elements to offload rendering to the GPU. Do not animate expensive properties like `box-shadow` or `filter` continuously. diff --git a/Skills/new-skills/antigravity-manager/SKILL.md b/Skills/new-skills/antigravity-manager/SKILL.md new file mode 100644 index 0000000..3c5e317 --- /dev/null +++ b/Skills/new-skills/antigravity-manager/SKILL.md @@ -0,0 +1,88 @@ +--- +name: antigravity-manager +description: Comprehensive guide to Antigravity Manager architecture, workflows, and development. Use this to understand how to work on the project. +--- + +# Antigravity Manager Developer Guide + +## 🏗️ Architecture Overview + +Antigravity Manager is a hybrid Desktop Application built with Electron, React, and NestJS. It follows a modular architecture where the frontend (Renderer) communicates with the backend (Main) via type-safe IPC (ORPC). + +```mermaid +graph TD + User[User Interface] -->|React/Vite| Renderer[Renderer Process] + Renderer -->|ORPC Client| IPC[IPC Layer] + IPC -->|ORPC Router| Main[Main Process] + Main -->|Bootstraps| Server[NestJS Server] + Main -->|Calls| Services[Service Layer] + Services -->|Read/Write| DB[(SQLite Database)] + Services -->|HTTP| Cloud[Cloud APIs (Google/Anthropic)] +``` + +### Key Technologies + +- **Frontend**: React 19, TailwindCSS v4, TanStack Router, TanStack Query. +- **Backend**: Electron (Main), NestJS (Core Logic), Better-SQLite3 (Data). +- **Communication**: ORPC (Type-safe IPC wrapper around Electron IPC). +- **Build**: Electron Forge + Vite. + +## 📂 Directory Structure + +- `src/main.ts`: Electron Main Process entry point. +- `src/preload.ts`: Bridge between Main and Renderer. +- `src/renderer.tsx`: React App entry point. +- `src/components/`: Reusable React UI components (Radix UI based). 
+- `src/ipc/`: IPC Routers and Handlers (Domain logic). + - `router.ts`: Main ORPC router definition. + - `account/`, `cloud/`, `database/`: Domain-specific handlers. +- `src/server/`: NestJS application modules (proxies/gateways). +- `src/services/`: Core business logic (framework agnostic). + - `GoogleAPIService.ts`: Gemini/Cloud interactions. + - `AutoSwitchService.ts`: Account rotation logic. +- `src/routes/`: Frontend routing definitions (File-based). + +## 🚀 Development Workflow + +### Prerequisites + +- Node.js 18+ +- npm (Project uses `package-lock.json`) + +### Common Commands + +- **Start Dev Server**: `npm start` +- **Lint Code**: `npm run lint` +- **Unit Test**: `npm run test:unit` +- **E2E Test**: `npm run test:e2e` +- **Build Production**: `npm run make` + +## 🧠 Core Concepts + +### IPC Communication (ORPC) + +The project uses `orpc` for type-safe communication. + +- **Define**: Create a router in `src/ipc/router.ts` with Zod schemas. +- **Implement**: Add logic in handlers (e.g., `src/ipc/account/handler.ts`). +- **Call**: Use the generated client in React components. + +### Database Access + +Data is stored in a local SQLite file (`test.db` in dev, user data in prod). + +- Use `Better-SQLite3` for direct access. +- Logic should be encapsulated in `src/services` or `src/ipc`. + +### Account Management + +- Accounts are added via OAuth (Google/Claude). +- `GoogleAPIService` handles token exchange and refreshing. +- `AutoSwitchService` monitors usage and switches active accounts automatically. + +## ⚠️ Critical Rules + +1. **Type Safety**: strict TypeScript usage; Zod for runtime validation. +2. **Components**: Use `src/components/ui` (Radix primitives) for consistency. +3. **Async**: Handle all IPC/DB calls asynchronously with try/catch. +4. **Security**: Never commit secrets. API keys are user-provided or encrypted locally. 
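The define → implement → call flow described under Core Concepts can be illustrated with a deliberately simplified registry. This is NOT the real `orpc` API (which generates a typed client from Zod-annotated routers); it is a plain-JavaScript sketch of the pattern, using a hypothetical `account.get` procedure:

```javascript
// Simplified sketch of the define/implement/call IPC pattern.
// NOT the real orpc API: orpc derives a typed client from the router definition.
function createRouter(procedures) {
  return {
    async call(name, input) {
      const proc = procedures[name];
      if (!proc) throw new Error(`Unknown procedure: ${name}`);
      if (proc.validate && !proc.validate(input)) {
        throw new Error(`Invalid input for ${name}`);
      }
      // In the real app this hop crosses the Electron IPC boundary.
      return proc.handler(input);
    },
  };
}

// Define the contract and implement the handler (hypothetical domain):
const router = createRouter({
  'account.get': {
    validate: (input) => typeof input?.id === 'number',
    handler: async ({ id }) => ({ id, email: `user${id}@example.com` }), // stubbed data
  },
});
```

The point of the pattern is that validation lives next to the router definition, so renderer-side callers cannot reach a handler with malformed input.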
diff --git a/Skills/new-skills/antigravity-rotator/SKILL.md b/Skills/new-skills/antigravity-rotator/SKILL.md new file mode 100644 index 0000000..48aa9c9 --- /dev/null +++ b/Skills/new-skills/antigravity-rotator/SKILL.md @@ -0,0 +1,64 @@ +--- +name: antigravity-rotator +description: Fully automated operations for Google Antigravity models. Provides multi-account auto-rotation, priority scheduling, hot session updates, and a cyberpunk-style dashboard. Use cases include (1) automatically managing multiple Antigravity accounts, (2) monitoring quota and switching automatically, (3) updating the model without restarting the session. +--- + +# Antigravity Rotator 🚀 + +This skill gives OpenClaw a deterministic operations workflow for Google Antigravity models. It wraps complex quota monitoring and automated scheduling into simple actions. + +## 🎯 When to Use +- The user has multiple Antigravity accounts and wants quota usage maximized automatically. +- The primary account's quota is exhausted and a **seamless switch** (no session restart) to a backup account is needed. +- Real-time, visual monitoring of all account states and rotation history is needed. + +## 🛠️ Quick Start + +### 1. Environment Setup (required) +Enter the skill directory and run the setup script: +```bash +cd skills/antigravity-rotator +node index.js --action=setup +``` +> **What it does**: auto-detects the `openclaw` and `node` paths and generates a `config.json` adapted to your system. + +### 2. Start the Dashboard +```bash +node index.js --action=dashboard +``` +- **URL**: `http://localhost:18090` +- **Account setup**: open the page and click **"Sync Credentials"** in the top-right corner; the script automatically scans and loads the accounts you have already logged into via `openclaw models auth login`. + +### 3. 
Configure the Cron Job +To run rotation fully automatically, a driver entry must be added to the system `crontab`: +```cron +# Check automatically every 10 minutes +*/10 * * * * [NODE_PATH] [SKILL_PATH]/index.js --action=rotate >> [LOG_PATH]/cron-rotate.log 2>&1 +``` +*Note: for the concrete paths, see the output printed by `node index.js --action=setup`.* + +## 📝 Core Configuration Reference (`config.json`) + +| Parameter | Type | Description | +| :--- | :--- | :--- | +| `openclawBin` | String | **Critical.** Absolute path to `openclaw`. | +| `modelPriority` | Array | Rotation priority list. Models listed first are tried first. | +| `quotas.low` | Number | Remaining-quota percentage threshold that triggers rotation (21 recommended). | +| `clientId` | String | (Advanced) Google OAuth client ID. Defaults to the shared Antigravity ID. | +| `clientSecret` | String | (Advanced) Google OAuth client secret. | +| `defaultProjectId` | String | (Advanced) Google project ID; affects the quota query endpoint. | + +## 🌟 Key Features +- **Hot session updates**: swaps the model in the background via the OpenClaw Gateway API; the user's ongoing conversation is completely unaffected. +- **Automatic token refresh**: built-in refresh logic keeps long-running deployments from needing manual re-login. +- **Model warmup**: automatically detects and activates models that are at full quota but outside their timing window, eliminating first-switch latency. +- **Transparent logging**: the dashboard shows the reason for every rotation in real time (e.g., scheduling a higher-priority model, current quota too low). + +## 🤖 Developer Resources +- **Entry point**: `index.js` +- **Logic engine**: `scripts/rotator.js` (quota queries and account scheduling) +- **Web UI**: `scripts/dashboard.js` (minimal server built on the `http` module) +- **Templates**: the `assets/` folder contains detailed JSON templates and cron samples. + +--- +*Antigravity Rotator - your Antigravity never goes down* 🥵 diff --git a/Skills/new-skills/antigravity-rotator/_meta.json b/Skills/new-skills/antigravity-rotator/_meta.json new file mode 100644 index 0000000..0b5dfd5 --- /dev/null +++ b/Skills/new-skills/antigravity-rotator/_meta.json @@ -0,0 +1,11 @@ +{ + "owner": "chocomintx", + "slug": "antigravity-rotator", + "displayName": "Publish Antigravity Rotator", + "latest": { + "version": "1.1.1", + "publishedAt": 1770417629152, + "commit": "https://github.com/openclaw/skills/commit/b8ddbef940e7c2005bb8cddba0c6410f4decf91a" + }, + "history": [] +} diff --git a/Skills/new-skills/antigravity-rotator/assets/config.example.json b/Skills/new-skills/antigravity-rotator/assets/config.example.json new file mode 100644 index 0000000..b1c391d --- /dev/null +++ b/Skills/new-skills/antigravity-rotator/assets/config.example.json @@ -0,0 +1,39 @@ +{ + "//_comment_1": "[Required] Path to your openclaw executable. Run 'node index.js --action=setup' to attempt auto-detection.", + "openclawBin": "/usr/local/bin/openclaw", + + "//_comment_2": "[Optional] Dashboard port; defaults to 18090.", + "dashboardPort": 18090, + + "//_comment_3": "[Required] Model priority list, highest priority first.", + "modelPriority": [ + "google-antigravity/claude-sonnet-4-5", + "google-antigravity/gemini-3-flash", + "google-antigravity/gemini-3-pro-low", + "google-antigravity/gemini-3-pro-high", + "google-antigravity/claude-opus-4-5-thinking", + "google-antigravity/claude-sonnet-4-5-thinking" + ], + + "//_comment_4": "[Monitored accounts] Auto-discovered via the dashboard's 'Sync Credentials', or fill in Gmail addresses manually here.", + "accounts": [], + + "//_comment_5": "[Advanced] Google OAuth credentials. Usually no need to change unless you use your own project.", + "clientId": "YOUR_CLIENT_ID_HERE.apps.googleusercontent.com", + "clientSecret": "GOCSPX-YOUR_CLIENT_SECRET_HERE", + "defaultProjectId": "YOUR_PROJECT_ID", + + "//_comment_6": "[Threshold] Rotation triggers when a model's remaining quota falls below this percentage.", + "quotas": { + "low": 21 + }, + + "//_comment_7": "[Paths] Usually no need to change. Files are located automatically under your .openclaw directory.", + "paths": { + "home": null, + "authProfiles": ".openclaw/agents/main/agent/auth-profiles.json", + "statusDb": ".openclaw/workspace/memory/model-status.json", + "rotationLog": ".openclaw/workspace/memory/rotation.log", + "rotationState": ".openclaw/workspace/memory/rotation-state.json" + } +} diff --git a/Skills/new-skills/antigravity-rotator/assets/crontab.sample.txt b/Skills/new-skills/antigravity-rotator/assets/crontab.sample.txt new file mode 100644 index 0000000..02b508f --- /dev/null +++ b/Skills/new-skills/antigravity-rotator/assets/crontab.sample.txt @@ -0,0 +1 @@ +*/10 * * * * /home/chocomint/.nvm/versions/node/v24.13.0/bin/node /home/chocomint/.openclaw/workspace/skills/antigravity-rotator/index.js --action=rotate >> /tmp/antigravity-rotate.log 2>&1 diff --git a/Skills/new-skills/antigravity-rotator/index.js b/Skills/new-skills/antigravity-rotator/index.js new file mode 100644 index 0000000..b477c10 --- /dev/null +++ b/Skills/new-skills/antigravity-rotator/index.js @@ -0,0 +1,68 @@ +#!/usr/bin/env node +const fs = 
require('fs'); +const path = require('path'); +const Rotator = require('./scripts/rotator'); +const Dashboard = require('./scripts/dashboard'); + +// 1. Load Config +const configPath = path.resolve(__dirname, 'config.json'); +const exampleConfigPath = path.resolve(__dirname, 'assets/config.example.json'); + +if (!fs.existsSync(configPath)) { + if (fs.existsSync(exampleConfigPath)) { + console.log("ℹ️ Config file not found. Creating default config from example..."); + fs.copyFileSync(exampleConfigPath, configPath); + } else { + console.error("❌ config.json not found and no example available."); + process.exit(1); + } +} + +const config = JSON.parse(fs.readFileSync(configPath, 'utf8')); + +// 2. Parse Args +const args = process.argv.slice(2); +let action = 'rotate'; // default + +args.forEach(arg => { + if (arg.startsWith('--action=')) { + action = arg.split('=')[1]; + } +}); + +// 3. Execute +(async () => { + if (action === 'setup') { + console.log("🛠️ Antigravity Rotator Setup Helper"); + const { execSync } = require('child_process'); + try { + const openclawPath = execSync('which openclaw', { encoding: 'utf8' }).trim(); + const nodePath = process.execPath; + console.log(`\nFound openclaw at: ${openclawPath}`); + console.log(`Found node at: ${nodePath}`); + + config.openclawBin = openclawPath; + fs.writeFileSync(configPath, JSON.stringify(config, null, 2)); + + console.log("\n✅ config.json has been updated with your local paths."); + console.log("\nNext steps:"); + console.log(`1. Run dashboard: node ${path.relative(process.cwd(), __filename)} --action=dashboard`); + console.log(`2. Setup cron (recommended):`); + console.log(` */10 * * * * ${nodePath} ${path.resolve(__filename)} --action=rotate >> /tmp/antigravity-rotate.log 2>&1`); + } catch (e) { + console.error("\n❌ Could not automatically find openclaw. 
Please set 'openclawBin' in config.json manually."); + } + return; + } + + if (action === 'rotate') { + const rotator = new Rotator(config); + await rotator.run(); + } else if (action === 'dashboard') { + const dashboard = new Dashboard(config); + dashboard.start(); + } else { + console.error(`Unknown action: ${action}`); + console.log("Available actions: --action=rotate, --action=dashboard"); + } +})(); diff --git a/Skills/new-skills/antigravity-rotator/package.json b/Skills/new-skills/antigravity-rotator/package.json new file mode 100644 index 0000000..18be6ba --- /dev/null +++ b/Skills/new-skills/antigravity-rotator/package.json @@ -0,0 +1,8 @@ +{ + "name": "antigravity-rotator", + "version": "1.1.0", + "description": "Google Antigravity 模型自动轮换引擎与看板。支持多账号余额监控、优先级调度、VIP 自动热更以及赛博朋克风格仪表盘。", + "main": "index.js", + "author": "Antigravity", + "license": "MIT" +} diff --git a/Skills/new-skills/antigravity-rotator/scripts/dashboard.js b/Skills/new-skills/antigravity-rotator/scripts/dashboard.js new file mode 100644 index 0000000..5904036 --- /dev/null +++ b/Skills/new-skills/antigravity-rotator/scripts/dashboard.js @@ -0,0 +1,416 @@ +const fs = require('fs'); +const http = require('http'); +const path = require('path'); +const { exec, execSync } = require('child_process'); + +class Dashboard { + constructor(config) { + this.config = config; + this.configPath = path.resolve(__dirname, '../config.json'); + this.home = config.paths.home || process.env.HOME; + + this.paths = { + authProfiles: path.resolve(this.home, config.paths.authProfiles), + statusDb: path.resolve(this.home, config.paths.statusDb), + rotationLog: path.resolve(this.home, config.paths.rotationLog), + rotationState: path.resolve(this.home, config.paths.rotationState) + }; + + // Path handling + this.openclawBin = config.openclawBin; + if (!this.openclawBin) { + try { this.openclawBin = execSync('which openclaw', { encoding: 'utf8' }).trim(); } catch(e) {} + } + + // Inject PATH for node/openclaw 
compatibility + const nodeBinPath = path.dirname(process.execPath); + process.env.PATH = nodeBinPath + ':' + process.env.PATH; + + this.getCoreModels = () => { + const priority = this.config.modelPriority || []; + return priority.slice(0, 3); + }; + } + + readJson(p) { try { return JSON.parse(fs.readFileSync(p, 'utf8')); } catch { return {}; } } + writeJson(p, d) { fs.writeFileSync(p, JSON.stringify(d, null, 2)); } + + getActiveAccount() { + try { + const authData = this.readJson(this.paths.authProfiles); + const vip = authData.profiles?.['google-antigravity:vip_rotation']; + if (!vip?.access) return null; + for (const acc of this.config.accounts) { + const realKey = `google-antigravity:${acc}`; + if (authData.profiles?.[realKey]?.access === vip.access) return acc; + } + } catch (e) { } + return null; + } + + getRotationLogs() { + try { + const content = fs.readFileSync(this.paths.rotationLog, 'utf8'); + const lines = content.trim().split('\n').slice(-100).reverse(); + return lines.map(line => { + const match = line.match(/\[(.*?)\] (.*)/); + if (match) { + let time = match[1]; + let msg = match[2]; + try { if (time.includes('T') || time.includes('-')) time = new Date(time).toLocaleTimeString('zh-CN'); } catch(e) {} + return { time, message: msg }; + } + return { time: '', message: line }; + }); + } catch (e) { return []; } + } + + async getDetailedLog() { + try { + const cronLog = path.join(this.home, '.openclaw/workspace/memory/cron-rotate.log'); + if (fs.existsSync(cronLog)) { + const content = fs.readFileSync(cronLog, 'utf8'); + const lastPart = content.slice(-20000); + const sections = lastPart.split(/=== (余量查询 & 自动轮换|Antigravity Rotator Engine)/); + if (sections.length > 1) return '=== ' + sections.pop(); + return lastPart; + } + } catch (e) {} + return '暂无详细日志数据。'; + } + + formatTimeLeft(resetAt) { + const now = Date.now(); + const diff = resetAt - now; + if (diff <= 0) return '已满'; + const hours = Math.floor(diff / (1000 * 60 * 60)); + const mins = 
Math.floor((diff % (1000 * 60 * 60)) / (1000 * 60)); + return hours > 0 ? `${hours}时 ${mins}分` : `${mins}分钟`; + } + + generateHTML() { + const statusDb = this.readJson(this.paths.statusDb); + const logs = this.getRotationLogs(); + const now = new Date().toLocaleString('zh-CN', { timeZone: 'Asia/Shanghai' }); + + let activeAccount = null; + let activeModel = null; + if (this.openclawBin) { + try { + const statusJson = execSync(`${this.openclawBin} gateway call status --params "{}" --json`, { encoding: 'utf8' }); + const realStatus = JSON.parse(statusJson); + const mainSession = realStatus.sessions?.recent?.find(s => s.key === 'agent:main:main'); + activeModel = mainSession?.model; + } catch (e) {} + } + + activeAccount = this.getActiveAccount(); + const coreModelKeys = this.getCoreModels(); + const allModelKeys = this.config.modelPriority || []; + + let accountCards = ''; + for (const acc of this.config.accounts) { + if (!acc || typeof acc !== 'string' || !acc.includes('@')) continue; + const isActive = acc === activeAccount; + const shortName = acc.split('@')[0]; + let modelsHtml = ''; + let lastUpdated = 0; + + for (const modelKey of coreModelKeys) { + const key = `${acc}:${modelKey}`; + const info = statusDb[key]; + if (!info) continue; + const quota = info.quota || 0; + const resetText = info.resetAt ? this.formatTimeLeft(info.resetAt) : '-'; + if (info.updatedAt > lastUpdated) lastUpdated = info.updatedAt; + let barClass = quota < 20 ? 'bar-critical' : (quota < 50 ? 'bar-warning' : 'bar-high'); + const displayName = modelKey.replace('google-antigravity/', ''); + const isModelCurrentlyActive = isActive && (activeModel === modelKey || activeModel === modelKey.split('/').pop()); + + modelsHtml += ` +
+
+ ${displayName} ${isModelCurrentlyActive ? '🔥' : ''} + ${quota}% +
+
+
重置: ${resetText}
+
`; + } + + const updateTimeStr = lastUpdated ? new Date(lastUpdated).toLocaleTimeString() : '无数据'; + accountCards += ` +
+
+ + ${isActive ? '正在使用' : ''} + +
+
${modelsHtml || '
暂无配额数据
'}
+ +
`; + } + + let priorityHtml = allModelKeys.map((m, i) => { + const displayName = m.replace('google-antigravity/', ''); + return ` +
+ ${i + 1}. ${displayName} +
+ + +
+
`; + }).join(''); + + let logsHtml = logs.map(l => ` +
+ ${l.time || ''} + ${l.message} + +
+ `).join(''); + + return ` + + + + + + Antigravity 矩阵看板 + + + +
+
+
+
Antigravity 矩阵看板
+
● 在线
+
+
+ ${now} + + +
+
+ +
${accountCards}
+ +
+ + + +
+
+ + + + + + + +`; + } + + start() { + const port = this.config.dashboardPort || 18090; + const server = http.createServer(async (req, res) => { + if (req.url === '/' && req.method === 'GET') { + res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' }); + res.end(this.generateHTML()); + } else if (req.url === '/api' && req.method === 'POST') { + let body = ''; + req.on('data', chunk => body += chunk); + req.on('end', async () => { + try { + const data = JSON.parse(body); + const result = await this.handleApi(data); + res.writeHead(200, { 'Content-Type': 'application/json' }); + res.end(JSON.stringify(result)); + } catch (e) { + res.writeHead(400); + res.end(JSON.stringify({ success: false, message: e.message })); + } + }); + } else { + res.writeHead(404); + res.end('Not Found'); + } + }); + server.listen(port, '0.0.0.0', () => { console.log(`\n🔮 Antigravity Dashboard started on http://0.0.0.0:${port}`); }); + } + + async handleApi(data) { + const { action } = data; + let changed = false; + const currentConfig = this.readJson(this.configPath); + + if (action === 'addAccount') { + if (!currentConfig.accounts.includes(data.email)) { currentConfig.accounts.push(data.email); changed = true; } + } else if (action === 'removeAccount') { + currentConfig.accounts = currentConfig.accounts.filter(a => a !== data.email); changed = true; + } else if (action === 'syncAccounts') { + const authData = this.readJson(this.paths.authProfiles); + const antigravityAccounts = Object.keys(authData.profiles || {}).filter(k => k.startsWith('google-antigravity:')).map(k => k.replace('google-antigravity:', '')); + let added = 0; + for (const email of antigravityAccounts) { if (!currentConfig.accounts.includes(email)) { currentConfig.accounts.push(email); added++; changed = true; } } + if (added > 0) { this.writeJson(this.configPath, currentConfig); this.config = currentConfig; } + return { success: true, added, accounts: antigravityAccounts }; + } else if (action === 'movePriority') 
{ + const { index, direction } = data; + const newIndex = index + direction; + if (newIndex >= 0 && newIndex < currentConfig.modelPriority.length) { + const item = currentConfig.modelPriority.splice(index, 1)[0]; + currentConfig.modelPriority.splice(newIndex, 0, item); changed = true; + } + } else if (action === 'setPriority') { + if (Array.isArray(data.order) && data.order.length > 0) { currentConfig.modelPriority = data.order; changed = true; } + } else if (action === 'triggerRotate') { + const indexPath = path.resolve(__dirname, '../index.js'); + exec(`node ${indexPath} --action=rotate`, (err, stdout) => { console.log('Manual rotation output:', stdout); }); + return { success: true, message: 'Rotation triggered' }; + } else if (action === 'getDetailedLog') { + const log = await this.getDetailedLog(); + return { success: true, log }; + } + + if (changed) { this.writeJson(this.configPath, currentConfig); this.config = currentConfig; return { success: true }; } + return { success: false, message: 'No action taken' }; + } +} + +module.exports = Dashboard; diff --git a/Skills/new-skills/antigravity-rotator/scripts/rotator.js b/Skills/new-skills/antigravity-rotator/scripts/rotator.js new file mode 100644 index 0000000..38b0da2 --- /dev/null +++ b/Skills/new-skills/antigravity-rotator/scripts/rotator.js @@ -0,0 +1,243 @@ +const fs = require('fs'); +const path = require('path'); +const { execSync } = require('child_process'); + +class Rotator { + constructor(config) { + this.config = config; + this.home = config.paths.home || process.env.HOME; + + this.paths = { + authProfiles: path.resolve(this.home, config.paths.authProfiles), + statusDb: path.resolve(this.home, config.paths.statusDb), + rotationLog: path.resolve(this.home, config.paths.rotationLog), + rotationState: path.resolve(this.home, config.paths.rotationState), + dashboardConfig: path.resolve(__dirname, '../config.json') + }; + + // API Configuration (configurable via config.json) + this.QUOTA_API_URL = 
'https://daily-cloudcode-pa.sandbox.googleapis.com/v1internal:fetchAvailableModels'; + this.REFRESH_TOKEN_URL = 'https://oauth2.googleapis.com/token'; + + // OAuth credentials (from config; the literals below are placeholders, not working defaults) + this.CLIENT_ID = config.clientId || 'YOUR_CLIENT_ID_HERE.apps.googleusercontent.com'; + this.CLIENT_SECRET = config.clientSecret || 'GOCSPX-YOUR_CLIENT_SECRET_HERE'; + this.DEFAULT_PROJECT_ID = config.defaultProjectId || 'YOUR_PROJECT_ID'; + + this.VIP_KEY = 'google-antigravity:vip_rotation'; + this.threshold = (config.quotas && config.quotas.low) || 21; + + // Path handling + this.openclawBin = config.openclawBin; + if (!this.openclawBin) { + try { this.openclawBin = execSync('which openclaw', { encoding: 'utf8' }).trim(); } catch(e) {} + } + + // Inject PATH for node/openclaw compatibility + const nodeBinPath = path.dirname(process.execPath); + process.env.PATH = nodeBinPath + ':' + process.env.PATH; + } + + readJson(p) { try { return JSON.parse(fs.readFileSync(p, 'utf8')); } catch { return {}; } } + writeJson(p, d) { fs.writeFileSync(p, JSON.stringify(d, null, 2)); } + appendLog(msg) { + try { fs.appendFileSync(this.paths.rotationLog, `[${new Date().toISOString()}] ${msg}\n`, 'utf8'); } catch { } + } + shortEmail(email) { return email.split('@')[0]; } + + async refreshAccessToken(refreshToken) { + const postData = new URLSearchParams({ + client_id: this.CLIENT_ID, + client_secret: this.CLIENT_SECRET, + refresh_token: refreshToken, + grant_type: 'refresh_token' + }).toString(); + const cmd = `curl -s --connect-timeout 10 --retry 1 -X POST "${this.REFRESH_TOKEN_URL}" -d "${postData}"`; + try { + const output = execSync(cmd, { encoding: 'utf8', timeout: 35000 }); + const json = JSON.parse(output); + if (json.access_token) return json.access_token; + throw new Error(json.error_description || 'Token refresh failed'); + } catch (e) { throw new Error(`Refresh Failed: ${e.message}`); } + } + + async fetchAccountQuota(accessToken, projectId) { + 
const headers = { 'Authorization': `Bearer ${accessToken}`, 'Content-Type': 'application/json', 'User-Agent': 'antigravity/1.15.8 linux/x64' }; + const body = { project: projectId || this.DEFAULT_PROJECT_ID }; + const headerArgs = Object.entries(headers).map(([k, v]) => `-H "${k}: ${v}"`).join(' '); + const bodyStr = JSON.stringify(body).replace(/"/g, '\\"'); + const cmd = `curl -s --connect-timeout 10 --retry 1 -X POST "${this.QUOTA_API_URL}" ${headerArgs} -d "${bodyStr}"`; + try { + const output = execSync(cmd, { encoding: 'utf8', timeout: 35000 }); + if (!output.trim()) throw new Error('Empty response'); + const data = JSON.parse(output); + if (data.error) { + if (data.error.code === 403) return { forbidden: true }; + throw new Error(data.error.message || data.error.status); + } + const quotas = {}; + for (const [mId, info] of Object.entries(data.models || {})) { + const fullKey = `google-antigravity/${mId}`; + const qI = info.quotaInfo; + const rT = qI && qI.resetTime ? new Date(qI.resetTime).getTime() : 0; + let pct = 100; + if (qI) { + if (qI.remainingFraction !== undefined) pct = Math.round(qI.remainingFraction * 100); + else if (rT > Date.now()) pct = 0; + } + quotas[fullKey] = { quota: pct, resetAt: rT, updatedAt: Date.now() }; + } + return quotas; + } catch (e) { throw new Error(`Quota Fetch Failed: ${e.message}`); } + } + + async run() { + if (!this.openclawBin) { console.error('❌ openclaw binary not found. 
Please run "node index.js --action=setup".'); return; } + console.log(`=== Antigravity Rotator Engine [${new Date().toLocaleTimeString()}] ===\n`); + const authData = this.readJson(this.paths.authProfiles); + const modelPriority = this.config.modelPriority || []; + const accounts = this.config.accounts || []; + if (accounts.length === 0) { console.log('⚠️ No accounts.'); return; } + + console.log(`📋 Accounts: ${accounts.map(a => this.shortEmail(a)).join(', ')}`); + console.log(`📋 Priority: ${modelPriority.map(m => m.split('/').pop()).join(' > ')}\n`); + + let currentModel = null; + try { + const statusOutput = execSync(`${this.openclawBin} gateway call status --params "{}" --json`, { encoding: 'utf8' }); + const status = JSON.parse(statusOutput); + currentModel = status.sessions?.recent?.find(s => s.key === 'agent:main:main')?.model || status.sessions?.defaults?.model; + if (currentModel) console.log(`📡 Current Session Model: ${currentModel}`); + } catch (e) { } + + if (!currentModel) currentModel = modelPriority[0]; + if (!currentModel) { console.log('⚠️ No models configured in modelPriority.'); return; } + if (!currentModel.includes('/')) { + const full = `google-antigravity/${currentModel}`; + if (modelPriority.includes(full)) currentModel = full; + } + + const currentVip = authData.profiles?.[this.VIP_KEY]; + let currentEmail = accounts[0]; + for (const acc of accounts) { + if (authData.profiles?.[`google-antigravity:${acc}`]?.access === currentVip?.access) { + currentEmail = acc; break; + } + } + + if (!currentModel.startsWith('google-antigravity/')) { + console.log(`ℹ️ Non-Antigravity model (${currentModel}). 
Skipping.`); return; + } + + console.log(`Current: ${this.shortEmail(currentEmail)} | ${currentModel}\n`); + + const statusDb = this.readJson(this.paths.statusDb) || {}; + let success = 0; + let authUpdated = false; + + for (const email of accounts) { + const profile = authData.profiles?.[`google-antigravity:${email}`]; + if (!profile) continue; + try { + let token = profile.access; + const now = Date.now(); + if (!profile.expires || profile.expires < now + 300000) { + console.log(`🔄 ${this.shortEmail(email)}: Refreshing token...`); + token = await this.refreshAccessToken(profile.refresh); + authData.profiles[`google-antigravity:${email}`].access = token; + authData.profiles[`google-antigravity:${email}`].expires = now + 3600000; + if (currentVip && currentVip.refresh === profile.refresh) { + authData.profiles[this.VIP_KEY].access = token; + authData.profiles[this.VIP_KEY].expires = now + 3600000; + } + authUpdated = true; + } + console.log(`📡 ${this.shortEmail(email)}: Fetching quotas...`); + const quotas = await this.fetchAccountQuota(token, profile.projectId); + if (quotas.forbidden) continue; + for (const [m, d] of Object.entries(quotas)) { + statusDb[`${email}:${m}`] = d; + console.log(` ${m.split('/').pop()}: ${d.quota}%`); + } + success++; + } catch (e) { console.log(`❌ ${this.shortEmail(email)}: ${e.message}`); } + } + + if (authUpdated) this.writeJson(this.paths.authProfiles, authData); + this.writeJson(this.paths.statusDb, statusDb); + if (success === 0) return; + + let choice = null; + const now = Date.now(); + for (const model of modelPriority) { + const startIdx = accounts.indexOf(currentEmail); + for (let i = 0; i < accounts.length; i++) { + const acc = accounts[(Math.max(0, startIdx) + i) % accounts.length]; + const info = statusDb[`${acc}:${model}`]; + if (!info) continue; + if (info.quota >= this.threshold || (info.resetAt && now > info.resetAt)) { + if (acc === currentEmail && model === currentModel) choice = { email: acc, model, reason: 
'Maintain' }; + else if (modelPriority.indexOf(model) < modelPriority.indexOf(currentModel)) choice = { email: acc, model, reason: 'Upgrade Model' }; + else if ((statusDb[`${currentEmail}:${currentModel}`]?.quota || 0) < this.threshold) choice = { email: acc, model, reason: 'Quota Low' }; + if (choice) break; + } + } + if (choice) break; + } + + if (!choice) return; + if (choice.reason === 'Maintain') { + console.log("\n✅ Status Quo: Maintaining current setup."); + } else { + this.performRotation(authData, choice, currentEmail, currentModel); + } + await this.warmup(statusDb, authData, modelPriority, accounts); + } + + performRotation(authData, choice, currentEmail, currentModel) { + const nextKey = `google-antigravity:${choice.email}`; + authData.profiles[this.VIP_KEY] = { ...authData.profiles[nextKey], email: 'vip_rotation_active' }; + this.writeJson(this.paths.authProfiles, authData); + + try { execSync(`${this.openclawBin} models set ${choice.model}`); } catch (e) { } + try { + const patch = JSON.stringify({ key: 'agent:main:main', model: choice.model }); + execSync(`${this.openclawBin} gateway call sessions.patch --params '${patch}'`); + } catch (e) { } + + this.writeJson(this.paths.rotationState, { + lastRotation: Date.now(), previousAccount: currentEmail, newAccount: choice.email, + previousModel: currentModel, newModel: choice.model, pendingNotification: true, reason: choice.reason + }); + + const zhReason = { 'Maintain': '维持', 'Upgrade Model': '调度更高优先级', 'Quota Low': '余量过低' }[choice.reason] || choice.reason; + this.appendLog(`✅ 已轮换账号:${this.shortEmail(choice.email)} | 模型:${choice.model.split('/').pop()} | 原因:${zhReason}`); + console.log(`\n✅ Rotation: ${this.shortEmail(currentEmail)} → ${this.shortEmail(choice.email)} (${zhReason})`); + } + + async warmup(statusDb, authData, modelPriority, accounts) { + const TOP = modelPriority.slice(0, 3); + const toW = []; + for (const acc of accounts) { + for (const m of TOP) { + const info = statusDb[`${acc}:${m}`]; + 
if (info && info.quota >= 100 && !info.resetAt) toW.push({ acc, m }); + } + } + if (toW.length === 0) return; + console.log(`\n🔥 Warming up ${toW.length} models...`); + for (const { acc, m } of toW) { + try { + const originalVip = authData.profiles[this.VIP_KEY]; + authData.profiles[this.VIP_KEY] = { ...authData.profiles[`google-antigravity:${acc}`], email: 'warmup_temp' }; + this.writeJson(this.paths.authProfiles, authData); + const sId = `warmup-${Date.now()}`; + execSync(`${this.openclawBin} gateway call sessions.patch --params '{"key":"agent:main:${sId}","model":"${m}"}'`); + execSync(`timeout 10 ${this.openclawBin} agent --session-id ${sId} --message "1" --json 2>/dev/null || true`); + if (originalVip) { authData.profiles[this.VIP_KEY] = originalVip; this.writeJson(this.paths.authProfiles, authData); } + console.log(` ✅ ${this.shortEmail(acc)} / ${m.split('/').pop()}`); + } catch (e) { } + } + } +} + +module.exports = Rotator; diff --git a/Skills/new-skills/antigravity-skill-orchestrator/README.md b/Skills/new-skills/antigravity-skill-orchestrator/README.md new file mode 100644 index 0000000..c1fb179 --- /dev/null +++ b/Skills/new-skills/antigravity-skill-orchestrator/README.md @@ -0,0 +1,32 @@ +# antigravity-skill-orchestrator + +A meta-skill package for the Antigravity IDE ecosystem. + +## Overview + +The `antigravity-skill-orchestrator` is an intelligent meta-skill that enhances an AI agent's ability to handle complex, multi-domain tasks. It provides strict guidelines and workflows enabling the agent to: + +1. **Evaluate Task Complexity**: Implementing guardrails to prevent the overuse of specialized skills on simple, straightforward tasks. +2. **Dynamically Select Skills**: Identifying the best combination of skills for a given complex problem. +3. **Track Skill Combinations**: Utilizing the `agent-memory-mcp` skill to store, search, and retrieve successful skill combinations for future reference, building institutional knowledge over time. 
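The combination tracking in point 3 can be pictured as a small piece of glue code. This is a sketch only: the `memory_search`/`memory_read` call shapes are assumptions modeled on `agent-memory-mcp`, and the stub backend exists purely so the example runs standalone.

```javascript
// Sketch: consult memory for a known skill combination before falling
// through to fresh skill discovery. The tool interfaces are hypothetical.
async function recallCombination(memory, query) {
  const hits = await memory.memory_search({ query, type: 'skill_combination' });
  if (hits.length === 0) return null; // no past knowledge: discover skills instead
  return memory.memory_read({ key: hits[0].key });
}

// Illustrative in-memory stub standing in for the real agent-memory-mcp server.
const stubMemory = {
  memory_search: async ({ query }) =>
    query.includes('ecommerce') ? [{ key: 'combination-ecommerce-checkout' }] : [],
  memory_read: async ({ key }) => ({
    key,
    content: '@stripe-integration + @react-state-management + @postgresql'
  })
};

recallCombination(stubMemory, 'ecommerce checkout')
  .then(result => console.log(result ? result.key : 'no match: discover skills'));
```

A `null` result means discovery should run; a hit short-circuits discovery and reuses the recorded combination.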
+ +## Installation + +This skill is designed to be used within the Antigravity IDE and integrated alongside the existing suite of AWESOME skills. + +Make sure you have the `agent-memory-mcp` skill installed and running to take full advantage of the combination tracking feature. + +## Usage + +When executing a prompt with an AI assistant via the Antigravity IDE, you can invoke this skill: + +```bash +@antigravity-skill-orchestrator Please build a comprehensive dashboard integrating fetching live data, an interactive UI, and performance optimizations. +``` + +The agent will then follow the directives in the `SKILL.md` to break down the task, search memory for similar challenges, assemble the right team of skills (e.g., `@react-patterns` + `@nodejs-backend-patterns`), and execute the task without over-complicating it. + +--- + +**Author:** [Wahid](https://github.com/wahidzzz) +**Source:** [antigravity-skill-orchestrator](https://github.com/wahidzzz/antigravity-skill-orchestrator) diff --git a/Skills/new-skills/antigravity-skill-orchestrator/SKILL.md b/Skills/new-skills/antigravity-skill-orchestrator/SKILL.md new file mode 100644 index 0000000..7bc1ebd --- /dev/null +++ b/Skills/new-skills/antigravity-skill-orchestrator/SKILL.md @@ -0,0 +1,123 @@ +--- +name: antigravity-skill-orchestrator +description: "A meta-skill that understands task requirements, dynamically selects appropriate skills, tracks successful skill combinations using agent-memory-mcp, and prevents skill overuse for simple tasks." +category: meta +risk: safe +source: community +tags: "[orchestration, meta-skill, agent-memory, task-evaluation]" +date_added: "2026-03-13" +--- + +# antigravity-skill-orchestrator + +## Overview + +The `skill-orchestrator` is a meta-skill designed to enhance the AI agent's ability to tackle complex problems. It acts as an intelligent coordinator that first evaluates the complexity of a user's request. Based on that evaluation, it determines if specialized skills are needed. 
If they are, it selects the right combination of skills, explicitly tracks these combinations using `@agent-memory-mcp` for future reference, and guides the agent through the execution process. Crucially, it includes strict guardrails to prevent the unnecessary use of specialized skills for simple tasks that can be solved with baseline capabilities. + +## When to Use This Skill + +- Use when tackling a complex, multi-step problem that likely requires multiple domains of expertise. +- Use when you are unsure which specific skills are best suited for a given user request, and need to discover them from the broader ecosystem. +- Use when the user explicitly asks to "orchestrate", "combine skills", or "use the best tools for the job" on a significant task. +- Use when you want to look up previously successful combinations of skills for a specific type of problem. + +## Core Concepts + +### Task Evaluation Guardrails +Not every task requires a specialized skill. For straightforward issues (e.g., small CSS fixes, simple script writing, renaming a variable), **DO NOT USE** specialized skills. Over-engineering simple tasks wastes tokens and time. + +Additionally, the orchestrator is strictly forbidden from creating new skills. Its sole purpose is to combine and use existing skills provided by the community or present in the current environment. + +Before invoking any skills, evaluate the task: +1. **Is the task simple/contained?** Solve it directly using the agent's ordinary file editing, search, and terminal capabilities available in the current environment. +2. **Is the task complex/multi-domain?** Only then should you proceed to orchestrate skills. + +### Skill Selection & Combinations +When a task is deemed complex, identify the necessary domains (e.g., frontend, database, deployment). Search available skills in the current environment to find the most relevant ones. If the required skills are not found locally, consult the master skill catalog. 
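The evaluation guardrail described above can be sketched as a small helper. Everything in this sketch is illustrative: the domain keywords, the threshold, and the function name are assumptions, not part of any real orchestrator API.

```javascript
// Hypothetical guardrail: count the distinct domains a request touches and
// only orchestrate specialized skills when more than one domain applies.
const DOMAIN_KEYWORDS = {
  frontend: ['react', 'ui', 'css', 'component'],
  backend: ['api', 'server', 'node', 'endpoint'],
  database: ['sql', 'postgres', 'schema', 'migration'],
  deployment: ['docker', 'kubernetes', 'rollout', 'ci pipeline']
};

function shouldOrchestrate(request) {
  const text = request.toLowerCase();
  const domains = Object.entries(DOMAIN_KEYWORDS)
    .filter(([, words]) => words.some(w => text.includes(w)))
    .map(([domain]) => domain);
  // Single-domain or unclassified requests stay with baseline capabilities.
  return { orchestrate: domains.length > 1, domains };
}

console.log(shouldOrchestrate('Change the submit button color in index.css'));
console.log(shouldOrchestrate('Build a React dashboard backed by a Postgres API'));
```

The first request maps to a single domain and is handled directly; the second spans frontend, backend, and database, so skill orchestration is justified.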
+ +### Master Skill Catalog +The Antigravity ecosystem maintains a master catalog of highly curated skills at `https://raw.githubusercontent.com/sickn33/antigravity-awesome-skills/main/CATALOG.md`. When local skills are insufficient, fetch this catalog to discover appropriate skills across the 9 primary categories: +- `architecture` +- `business` +- `data-ai` +- `development` +- `general` +- `infrastructure` +- `security` +- `testing` +- `workflow` + +### Memory Integration (`@agent-memory-mcp`) +To build institutional knowledge, the orchestrator relies on the `agent-memory-mcp` skill to record and retrieve successful skill combinations. + +## Step-by-Step Guide + +### 1. Task Evaluation & Guardrail Check +[Triggered when facing a new user request that might need skills] +1. Read the user's request. +2. Ask yourself: "Can I solve this efficiently with just basic file editing and terminal commands?" +3. If YES: Proceed without invoking specialized skills. Stop the orchestration here. +4. If NO: Proceed to step 2. + +### 2. Retrieve Past Knowledge +[Triggered if the task is complex] +1. Use the `memory_search` tool provided by `agent-memory-mcp` to search for similar past tasks. + - Example query: `memory_search({ query: "skill combination for react native and firebase", type: "skill_combination" })` +2. If a working combination exists, read the details using `memory_read`. +3. If no relevant memory exists, proceed to Step 3. + +### 3. Discover and Select Skills +[Triggered if no past knowledge covers this task] +1. Analyze the core requirements (e.g., "needs a React UI, a Node.js backend, and a PostgreSQL database"). +2. Query the locally available skills using the current environment's skill list or equivalent discovery mechanism to find the best match for each requirement. +3. 
**If local skills are insufficient**, fetch the master catalog with the web or command-line retrieval tools available in the current environment: `https://raw.githubusercontent.com/sickn33/antigravity-awesome-skills/main/CATALOG.md`. +4. Scan the catalog's 9 main categories to identify the appropriate skills to bring into the current context. +5. Select the minimal set of skills needed. **Do not over-select.** + +### 4. Apply Skills and Track the Combination +[Triggered after executing the task using the selected skills] +1. Assume the task was completed successfully using a new combination of skills (e.g., `@react-patterns` + `@nodejs-backend-patterns` + `@postgresql`). +2. Record this combination for future use using `memory_write` from `agent-memory-mcp`. + - Ensure the type is `skill_combination`. + - Provide a descriptive key and content detailing why these skills worked well together. + +## Examples + +### Example 1: Handling a Simple Task (The Guardrail in Action) +**User Request:** "Change the color of the submit button in `index.css` to blue." +**Action:** The skill orchestrator evaluates the task. It determines this is a "simple/contained" task. It **does not** invoke specialized skills. It directly edits `index.css`. 
+ +### Example 2: Recording a New Skill Combination +```javascript +// Using the agent-memory-mcp tool after successfully building a complex feature +memory_write({ + key: "combination-ecommerce-checkout", + type: "skill_combination", + content: "For e-commerce checkouts, using @stripe-integration combined with @react-state-management and @postgresql effectively handles the full flow from UI state to payment processing to order recording.", + tags: ["ecommerce", "checkout", "stripe", "react"] +}) +``` + +### Example 3: Retrieving a Combination +```javascript +// At the start of a new e-commerce task +memory_search({ + query: "ecommerce checkout", + type: "skill_combination" +}) +// Returns the key "combination-ecommerce-checkout", which you then read: +memory_read({ key: "combination-ecommerce-checkout" }) +``` + +## Best Practices + +- ✅ **Do:** Always evaluate task complexity *before* looking for skills. +- ✅ **Do:** Keep the number of orchestrated skills as small as possible. +- ✅ **Do:** Use highly descriptive keys when running `memory_write` so they are easy to search later. +- ❌ **Don't:** Use this skill for simple bug fixes or UI tweaks. +- ❌ **Don't:** Combine skills that have overlapping and conflicting instructions without a clear plan to resolve the conflict. +- ❌ **Don't:** Attempt to construct, generate, or create new skills. Only combine what is available. + +## Related Skills + +- `@agent-memory-mcp` - Essential for this skill to function. Provides the persistent storage for skill combinations. diff --git a/Skills/new-skills/antigravity-workflows/SKILL.md b/Skills/new-skills/antigravity-workflows/SKILL.md new file mode 100644 index 0000000..f807395 --- /dev/null +++ b/Skills/new-skills/antigravity-workflows/SKILL.md @@ -0,0 +1,86 @@ +--- +name: antigravity-workflows +description: "Orchestrate multiple Antigravity skills through guided workflows for SaaS MVP delivery, security audits, AI agent builds, and browser QA." 
+risk: none +source: self +date_added: "2026-02-27" +--- + +# Antigravity Workflows + +Use this skill to turn a complex objective into a guided sequence of skill invocations. + +## When to Use This Skill + +Use this skill when: +- The user wants to combine several skills without manually selecting each one. +- The goal is multi-phase (for example: plan, build, test, ship). +- The user asks for best-practice execution for common scenarios like: + - Shipping a SaaS MVP + - Running a web security audit + - Building an AI agent system + - Implementing browser automation and E2E QA + +## Workflow Source of Truth + +Read workflows in this order: +1. `docs/WORKFLOWS.md` for human-readable playbooks. +2. `data/workflows.json` for machine-readable workflow metadata. + +## How to Run This Skill + +1. Identify the user's concrete outcome. +2. Propose the 1-2 best matching workflows. +3. Ask the user to choose one. +4. Execute step-by-step: + - Announce current step and expected artifact. + - Invoke recommended skills for that step. + - Verify completion criteria before moving to next step. +5. At the end, provide: + - Completed artifacts + - Validation evidence + - Remaining risks and next actions + +## Default Workflow Routing + +- Product delivery request -> `ship-saas-mvp` +- Security review request -> `security-audit-web-app` +- Agent/LLM product request -> `build-ai-agent-system` +- E2E/browser testing request -> `qa-browser-automation` +- Domain-driven design request -> `design-ddd-core-domain` + +## Copy-Paste Prompts + +```text +Use @antigravity-workflows to run the "Ship a SaaS MVP" workflow for my project idea. +``` + +```text +Use @antigravity-workflows and execute a full "Security Audit for a Web App" workflow. +``` + +```text +Use @antigravity-workflows to guide me through "Build an AI Agent System" with checkpoints. +``` + +```text +Use @antigravity-workflows to execute the "QA and Browser Automation" workflow and stabilize flaky tests. 
+``` + +```text +Use @antigravity-workflows to execute the "Design a DDD Core Domain" workflow for my new service. +``` + +## Limitations + +- This skill orchestrates; it does not replace specialized skills. +- It depends on the local availability of referenced skills. +- It does not guarantee success without environment access, credentials, or required infrastructure. +- For stack-specific browser automation in Go, `go-playwright` may require the corresponding skill to be present in your local skills repository. + +## Related Skills + +- `concise-planning` +- `brainstorming` +- `workflow-automation` +- `verification-before-completion` diff --git a/Skills/new-skills/antigravity-workflows/resources/implementation-playbook.md b/Skills/new-skills/antigravity-workflows/resources/implementation-playbook.md new file mode 100644 index 0000000..9db5deb --- /dev/null +++ b/Skills/new-skills/antigravity-workflows/resources/implementation-playbook.md @@ -0,0 +1,36 @@ +# Antigravity Workflows Implementation Playbook + +This document explains how an agent should execute workflow-based orchestration. + +## Execution Contract + +For every workflow: + +1. Confirm objective and scope. +2. Select the best-matching workflow. +3. Execute workflow steps in order. +4. Produce one concrete artifact per step. +5. Validate before continuing. + +## Step Artifact Examples + +- Plan step -> scope document or milestone checklist. +- Build step -> code changes and implementation notes. +- Test step -> test results and failure triage. +- Release step -> rollout checklist and risk log. + +## Safety Guardrails + +- Never run destructive actions without explicit user approval. +- If a required skill is missing, state the gap and fall back to the closest available skill. +- When security testing is involved, ensure authorization is explicit. + +## Suggested Completion Format + +At workflow completion, return: + +1. Completed steps +2. Artifacts produced +3. Validation evidence +4. Open risks +5. 
Suggested next action diff --git a/Skills/new-skills/trellis-meta/SKILL.md b/Skills/new-skills/trellis-meta/SKILL.md new file mode 100644 index 0000000..5a6d1cb --- /dev/null +++ b/Skills/new-skills/trellis-meta/SKILL.md @@ -0,0 +1,468 @@ +--- +name: trellis-meta +description: | + Meta-skill for understanding and customizing Mindfold Trellis - the all-in-one AI workflow system for 9 AI coding platforms (Claude Code, Cursor, OpenCode, iFlow, Codex, Kilo, Kiro, Gemini CLI, Antigravity). This skill documents the ORIGINAL Trellis system design. When users customize their Trellis installation, modifications should be recorded in a project-local `trellis-local` skill, NOT in this meta-skill. Use this skill when: (1) understanding Trellis architecture, (2) customizing Trellis workflows, (3) adding commands/agents/hooks, (4) troubleshooting issues, or (5) adapting Trellis to specific projects. +--- + +# Trellis Meta-Skill + +## Version Compatibility + +| Item | Value | +| --------------------------- | ---------- | +| **Trellis CLI Version** | 0.3.0 | +| **Skill Last Updated** | 2026-02-28 | +| **Min Claude Code Version** | 1.0.0+ | + +> ⚠️ **Version Mismatch Warning**: If your Trellis CLI version differs from above, some features may not work as documented. Run `trellis --version` to check. 
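The version check above can be automated, for example in a project setup script. A minimal sketch — it assumes `trellis --version` prints a bare `X.Y.Z` semantic version somewhere in its output, which you should confirm for your install:

```python
import re
import subprocess

DOCUMENTED_VERSION = "0.3.0"  # the version this skill documents

def parse_semver(output: str) -> str:
    """Extract the first X.Y.Z version string from CLI output."""
    match = re.search(r"\d+\.\d+\.\d+", output)
    if not match:
        raise ValueError(f"no semantic version found in {output!r}")
    return match.group(0)

def check_cli_version() -> bool:
    """Compare the installed Trellis CLI version against the documented one."""
    out = subprocess.run(["trellis", "--version"],
                         capture_output=True, text=True).stdout
    installed = parse_semver(out)
    if installed != DOCUMENTED_VERSION:
        print(f"warning: skill documents {DOCUMENTED_VERSION}, "
              f"CLI reports {installed}")
    return installed == DOCUMENTED_VERSION
```

If the versions differ, treat the feature matrix below as advisory rather than authoritative.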
+ +--- + +## Platform Compatibility + +### Feature Support Matrix + +| Feature | Claude Code | iFlow | Cursor | OpenCode | Codex | Kilo | Kiro | Gemini CLI | Antigravity | +| --------------------------- | ----------- | ------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ------------ | +| **Core Systems** | | | | | | | | | | +| Workspace system | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | +| Task system | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | +| Spec system | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Full | +| Commands/Skills | ✅ Full | ✅ Full | ✅ Full | ✅ Full | ✅ Skills | ✅ Full | ✅ Skills | ✅ TOML | ✅ Workflows | +| Agent definitions | ✅ Full | ✅ Full | ⚠️ Manual | ✅ Full | ⚠️ Manual | ⚠️ Manual | ⚠️ Manual | ⚠️ Manual | ⚠️ Manual | +| **Hook-Dependent Features** | | | | | | | | | | +| SessionStart hook | ✅ Full | ✅ Full | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | +| PreToolUse hook | ✅ Full | ✅ Full | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | +| SubagentStop hook | ✅ Full | ✅ Full | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | +| Auto context injection | ✅ Full | ✅ Full | ❌ Manual | ❌ Manual | ❌ Manual | ❌ Manual | ❌ Manual | ❌ Manual | ❌ Manual | +| Ralph Loop | ✅ Full | ✅ Full | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | +| **Multi-Agent/Session** | | | | | | | | | | +| Multi-Agent (current dir) | ✅ Full | ✅ Full | ⚠️ Limited | ⚠️ Limited | ⚠️ Limited | ⚠️ Limited | ⚠️ Limited | ⚠️ Limited | ⚠️ Limited | +| Multi-Session (worktrees) | ✅ Full | ✅ Full | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None | + +### Legend + +- ✅ **Full**: Feature works as documented +- ⚠️ **Limited/Manual**: Works but requires manual steps +- ❌ **None/Manual**: Not supported or requires manual workaround + +### Platform Categories + +#### Full 
Hook Support (Claude Code, iFlow) + +All features work as documented. Hooks provide automatic context injection and quality enforcement. iFlow shares the same Python hook system as Claude Code. + +#### Commands Only (Cursor, OpenCode, Codex, Kilo, Kiro, Gemini CLI, Antigravity) + +- **Works**: Workspace, tasks, specs, commands/skills (platform-specific format) +- **Doesn't work**: Hooks, auto-injection, Ralph Loop, Multi-Session +- **Workaround**: Manually read spec files at session start; no automatic quality gates +- **Note**: Each platform uses its own command format (Codex uses Skills, Gemini uses TOML, Antigravity uses Workflows) + +### Designing for Portability + +When customizing Trellis, consider platform compatibility: + +``` +┌─────────────────────────────────────────────────────────────┐ +│ PORTABLE (All 9 Platforms) │ +│ - .trellis/workspace/ - .trellis/tasks/ │ +│ - .trellis/spec/ - Platform commands/skills │ +│ - File-based configs - JSONL context files │ +└─────────────────────────────────────────────────────────────┘ + │ +┌─────────────────────────────▼───────────────────────────────┐ +│ HOOK-CAPABLE (Claude Code + iFlow) │ +│ - .claude/hooks/ or .iflow/hooks/ │ +│ - settings.json hook configuration │ +│ - Auto context injection - SubagentStop control │ +│ - Ralph Loop - Multi-Session worktrees │ +└─────────────────────────────────────────────────────────────┘ +``` + +--- + +## Purpose + +This is the **meta-skill** for Trellis - it documents the original, unmodified Trellis system. When customizing Trellis for a specific project, record changes in a **project-local skill** (`trellis-local`), keeping this meta-skill as the authoritative reference for vanilla Trellis. 
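The precedence between the two skills can be expressed as a small lookup. A sketch, assuming the standard skill paths — the user-level fallback location is an assumption for illustration, not something Trellis enforces:

```python
from pathlib import Path

def resolve_trellis_docs(project_root: str) -> Path:
    """Prefer the project-local customization skill; fall back to the meta-skill."""
    local = Path(project_root) / ".claude" / "skills" / "trellis-local" / "SKILL.md"
    if local.exists():
        return local  # project-specific customizations take precedence
    # Vanilla reference documentation lives in the user-level meta-skill.
    return Path.home() / ".claude" / "skills" / "trellis-meta" / "SKILL.md"
```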
+ +## Skill Hierarchy + +``` +~/.claude/skills/ +└── trellis-meta/ # THIS SKILL - Original Trellis documentation + # ⚠️ DO NOT MODIFY for project-specific changes + +project/.claude/skills/ +└── trellis-local/ # Project-specific customizations + # ✅ Record all modifications here +``` + +**Why this separation?** + +- User may have multiple projects with different Trellis customizations +- Each project's `trellis-local` skill tracks ITS OWN modifications +- The meta-skill remains clean as the reference for original Trellis +- Enables easy upgrades: compare meta-skill with new Trellis version + +--- + +## Self-Iteration Protocol + +When modifying Trellis for a project, follow this protocol: + +### 1. Check for Existing Project Skill + +```bash +# Look for project-local skill +ls -la .claude/skills/trellis-local/ +``` + +### 2. Create Project Skill if Missing + +If no `trellis-local` exists, create it: + +```bash +mkdir -p .claude/skills/trellis-local +``` + +Then create `.claude/skills/trellis-local/SKILL.md`: + +```markdown +--- +name: trellis-local +description: | + Project-specific Trellis customizations for [PROJECT_NAME]. + This skill documents modifications made to the vanilla Trellis system + in this project. Inherits from trellis-meta for base documentation. +--- + +# Trellis Local - [PROJECT_NAME] + +## Base Version + +Trellis version: X.X.X (from package.json or trellis --version) +Date initialized: YYYY-MM-DD + +## Customizations + +### Commands Added + +(none yet) + +### Agents Modified + +(none yet) + +### Hooks Changed + +(none yet) + +### Specs Customized + +(none yet) + +### Workflow Changes + +(none yet) + +--- + +## Changelog + +### YYYY-MM-DD + +- Initial setup +``` + +### 3. 
Record Every Modification + +When making ANY change to Trellis, update `trellis-local/SKILL.md`: + +#### Example: Adding a new command + +```markdown +### Commands Added + +#### /trellis:my-command + +- **File**: `.claude/commands/trellis/my-command.md` +- **Purpose**: [what it does] +- **Added**: 2026-01-31 +- **Why**: [reason for adding] +``` + +#### Example: Modifying a hook + +```markdown +### Hooks Changed + +#### inject-subagent-context.py + +- **Change**: Added support for `my-agent` type +- **Lines modified**: 45-67 +- **Date**: 2026-01-31 +- **Why**: [reason] +``` + +### 4. Never Modify Meta-Skill for Project Changes + +The `trellis-meta` skill should ONLY be updated when: + +- Trellis releases a new version +- Fixing documentation errors in the original +- Adding missing documentation for original features + +--- + +## Architecture Overview + +Trellis transforms AI assistants into structured development partners through **enforced context injection**. + +### System Layers + +``` +┌─────────────────────────────────────────────────────────────────────┐ +│ USER INTERACTION │ +│ /trellis:start /trellis:brainstorm /trellis:parallel /trellis:finish-work │ +└─────────────────────────────────┬───────────────────────────────────┘ + │ +┌─────────────────────────────────▼───────────────────────────────────┐ +│ SKILLS LAYER │ +│ .claude/commands/trellis/*.md (slash commands) │ +│ .claude/agents/*.md (sub-agent definitions) │ +└─────────────────────────────────┬───────────────────────────────────┘ + │ +┌─────────────────────────────────▼───────────────────────────────────┐ +│ HOOKS LAYER │ +│ SessionStart → session-start.py (injects workflow + context) │ +│ PreToolUse:Task → inject-subagent-context.py (spec injection) │ +│ SubagentStop → ralph-loop.py (quality enforcement) │ +└─────────────────────────────────┬───────────────────────────────────┘ + │ +┌─────────────────────────────────▼───────────────────────────────────┐ +│ PERSISTENCE LAYER │ +│ .trellis/workspace/ 
(journals, session history) │ +│ .trellis/tasks/ (task tracking, context files) │ +│ .trellis/spec/ (coding guidelines) │ +└─────────────────────────────────────────────────────────────────────┘ +``` + +### Key Design Principles + +| Principle | Description | +| ---------------------------------- | --------------------------------------------------- | +| **Specs Injected, Not Remembered** | Hooks enforce specs - agents always receive context | +| **Read Before Write** | Understand guidelines before writing code | +| **Layered Context** | Only relevant specs load (via JSONL files) | +| **Human Commits** | AI never commits - human validates first | +| **Pure Dispatcher** | Dispatch agent only orchestrates | + +--- + +## Core Components + +### 1. Workspace System + +Track development progress across sessions with per-developer isolation. + +``` +.trellis/workspace/ +├── index.md # Global overview +└── {developer}/ # Per-developer + ├── index.md # Personal index (@@@auto markers) + └── journal-N.md # Session journals (max 2000 lines) +``` + +**Key files**: `.trellis/.developer` (identity), journals (session history) + +### 2. Task System + +Track work items with phase-based execution. + +``` +.trellis/tasks/{MM-DD-slug-assignee}/ +├── task.json # Metadata, phases, branch +├── prd.md # Requirements +├── implement.jsonl # Context for implement agent +├── check.jsonl # Context for check agent +└── debug.jsonl # Context for debug agent +``` + +### 3. Spec System + +Maintain coding standards that get injected to agents. + +``` +.trellis/spec/ +├── frontend/ # Frontend guidelines +├── backend/ # Backend guidelines +└── guides/ # Thinking guides +``` + +### 4. Hooks System + +Automatically inject context and enforce quality. 
+ +| Hook | When | Purpose | +| -------------------- | ----------------- | --------------------------------- | +| `SessionStart` | Session begins | Inject workflow, guidelines | +| `PreToolUse:Task` | Before sub-agent | Inject specs via JSONL | +| `SubagentStop:check` | Check agent stops | Enforce verification (Ralph Loop) | + +### 5. Agent System + +Specialized agents for different phases. + +| Agent | Purpose | Restriction | +| ----------- | --------------------- | ----------------------- | +| `dispatch` | Orchestrate pipeline | Pure dispatcher | +| `plan` | Evaluate requirements | Can reject unclear reqs | +| `research` | Find code patterns | Read-only | +| `implement` | Write code | No git commit | +| `check` | Review and self-fix | Ralph Loop controlled | +| `debug` | Fix issues | Precise fixes only | + +### 6. Multi-Agent Pipeline + +Run parallel isolated sessions via Git worktrees. + +``` +plan.py → start.py → Dispatch → implement → check → create-pr +``` + +--- + +## Customization Guide + +### Adding a Command + +1. Create `.claude/commands/trellis/my-command.md` +2. Update `trellis-local` skill with the change + +### Adding an Agent + +1. Create `.claude/agents/my-agent.md` with YAML frontmatter +2. Update `inject-subagent-context.py` to handle new agent type +3. Create `my-agent.jsonl` in task directories +4. Update `trellis-local` skill + +### Modifying Hooks + +1. Edit the hook script in `.claude/hooks/` +2. Document the change in `trellis-local` skill +3. Note which lines were modified and why + +### Extending Specs + +1. Create new category in `.trellis/spec/my-category/` +2. Add `index.md` and guideline files +3. Reference in JSONL context files +4. Update `trellis-local` skill + +### Changing Task Workflow + +1. Modify `next_action` array in `task.json` +2. Update dispatch or hook scripts as needed +3. 
Document in `trellis-local` skill + +--- + +## Resources + +Reference documents are organized by platform compatibility: + +``` +references/ +├── core/ # All Platforms (Claude Code, Cursor, etc.) +├── claude-code/ # Claude Code Only +├── how-to-modify/ # Modification Guides +└── meta/ # Documentation & Templates +``` + +### `core/` - All Platforms + +| Document | Content | +| -------------- | ---------------------------------------------- | +| `overview.md` | Core systems introduction | +| `files.md` | All `.trellis/` files with purposes | +| `workspace.md` | Workspace system, journals, developer identity | +| `tasks.md` | Task system, directories, JSONL context files | +| `specs.md` | Spec system, guidelines organization | +| `scripts.md` | Platform-independent scripts | + +### `claude-code/` - Claude Code Only + +| Document | Content | +| -------------------- | ---------------------------------- | +| `overview.md` | Claude Code features introduction | +| `hooks.md` | Hook system, context injection | +| `agents.md` | Agent types, invocation, Task tool | +| `ralph-loop.md` | Quality enforcement mechanism | +| `multi-session.md` | Parallel worktree sessions | +| `worktree-config.md` | worktree.yaml configuration | +| `scripts.md` | Claude Code only scripts | + +### `how-to-modify/` - Modification Guides + +| Document | Scenario | +| ------------------ | ------------------------------------- | +| `overview.md` | Quick reference for all modifications | +| `add-command.md` | Adding slash commands | +| `add-agent.md` | Adding new agent types | +| `add-spec.md` | Adding spec categories | +| `add-phase.md` | Adding workflow phases | +| `modify-hook.md` | Modifying hook behavior | +| `change-verify.md` | Changing verify commands | + +### `meta/` - Documentation + +| Document | Content | +| --------------------------- | -------------------------------- | +| `platform-compatibility.md` | Detailed platform support matrix | +| `self-iteration-guide.md` | How to document 
customizations | +| `trellis-local-template.md` | Template for project-local skill | + +--- + +## Quick Reference + +### Key Scripts + +| Script | Purpose | +| ---------------------- | -------------------- | +| `get_context.py` | Get session context | +| `task.py` | Task management | +| `add_session.py` | Record session | +| `multi_agent/start.py` | Start parallel agent | + +### Key Paths + +| Path | Purpose | +| ------------------------ | ------------------- | +| `.trellis/.developer` | Developer identity | +| `.trellis/.current-task` | Active task pointer | +| `.trellis/workflow.md` | Main workflow docs | +| `.claude/settings.json` | Hook configuration | + +--- + +## Upgrade Protocol + +When upgrading Trellis to a new version: + +1. **Compare** new meta-skill with current +2. **Review** changes in new version +3. **Check** `trellis-local` for conflicts +4. **Merge** carefully, preserving customizations +5. **Update** `trellis-local` with migration notes + +```markdown +## Changelog + +### 2026-02-01 - Upgraded to Trellis X.Y.Z + +- Merged new hook behavior from meta-skill +- Kept custom agent `my-agent` +- Updated check.jsonl template +``` diff --git a/Skills/new-skills/trellis-meta/references/claude-code/agents.md b/Skills/new-skills/trellis-meta/references/claude-code/agents.md new file mode 100644 index 0000000..e5857e4 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/claude-code/agents.md @@ -0,0 +1,448 @@ +# Agents Reference + +Documentation for the Trellis agent system - specialized AI agents for different development phases. + +--- + +## Overview + +Trellis uses **specialized agents** for different tasks. Each agent has specific capabilities, restrictions, and context injection. + +**Key Insight**: Agents work in the **current directory** - no worktree needed. Multi-Session (worktree isolation) is a separate concept. 
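As a mental model, the phase orchestration these agents participate in reduces to walking the `next_action` array in `task.json` and handing each pending phase to one agent. A hypothetical sketch — the `done` flag is illustrative only and not part of the documented schema:

```python
import json
from pathlib import Path

def pending_phases(task_dir: str) -> list[dict]:
    """Read task.json and return the phases still to run, in order."""
    task = json.loads((Path(task_dir) / "task.json").read_text())
    # `done` is a hypothetical per-phase flag; adapt to your real schema.
    return [step for step in task.get("next_action", []) if not step.get("done")]

def dispatch(task_dir: str, run_agent) -> None:
    """Hand each pending phase to run_agent(action, task_dir) in sequence."""
    for step in pending_phases(task_dir):
        run_agent(step["action"], task_dir)
```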
+ +--- + +## Agent Types + +| Agent | Purpose | Can Write | Git Commit | +| ----------- | --------------------- | -------------- | --------------- | +| `dispatch` | Orchestrate phases | No | Only via script | +| `plan` | Evaluate requirements | Yes (task dir) | No | +| `research` | Find patterns | No | No | +| `implement` | Write code | Yes | No | +| `check` | Review & self-fix | Yes | No | +| `debug` | Fix issues | Yes | No | + +--- + +## Agent Definitions + +Location: `.claude/agents/*.md` + +### Format + +```markdown +--- +name: agent-name +description: | + What this agent does. +tools: Read, Write, Edit, Bash, Glob, Grep +model: opus +--- + +# Agent Name + +## Core Responsibilities + +... + +## Workflow + +... + +## Forbidden Operations + +... +``` + +--- + +## Dispatch Agent + +**File**: `.claude/agents/dispatch.md` + +**Purpose**: Pure orchestrator - calls other agents in sequence. + +**Key Principle**: Does NOT read specs directly. Hooks inject context to subagents. + +**Tools**: `Read, Bash` + +**Workflow**: + +``` +1. Read .trellis/.current-task → find task directory +2. Read task.json → get next_action array +3. For each phase: + - implement → Task(subagent_type="implement") + - check → Task(subagent_type="check") + - finish → Task(subagent_type="check", prompt="[finish]...") + - create-pr → Bash("python3 ... create_pr.py") +``` + +**Forbidden**: + +- Reading spec files directly +- Modifying code +- Git operations (except via create-pr script) + +--- + +## Plan Agent + +**File**: `.claude/agents/plan.md` + +**Purpose**: Evaluate requirements and configure task directory. 
+ +**Tools**: `Read, Bash, Glob, Grep, Task` + +**Capabilities**: + +- **REJECT** unclear/vague requirements +- Call Research Agent to analyze codebase +- Create `prd.md` with requirements +- Configure `task.json` (branch, scope, phases) +- Initialize JSONL context files + +**Rejection Criteria**: + +- Vague requirements ("make it better") +- Incomplete information +- Out of scope +- Potentially harmful +- Too large (should split) + +**Output**: + +``` +task-dir/ +├── task.json # Configured with branch, scope, dev_type +├── prd.md # Clear requirements +├── implement.jsonl +├── check.jsonl +└── debug.jsonl +``` + +--- + +## Research Agent + +**File**: `.claude/agents/research.md` + +**Purpose**: Find and explain code patterns. Pure research, no modifications. + +**Tools**: `Read, Glob, Grep, web search, chrome-devtools` + +**Allowed**: + +- Describe what exists +- Describe where it is +- Describe how it works +- Describe interactions + +**Forbidden** (unless explicitly asked): + +- Suggest improvements +- Criticize implementation +- Recommend refactoring +- Modify any files +- Git operations + +**Output Format**: + +```markdown +## Query Summary + +... + +## Files Found + +- path/to/file.ts - description + +## Code Patterns + +... + +## Related Specs + +... +``` + +--- + +## Implement Agent + +**File**: `.claude/agents/implement.md` + +**Purpose**: Write code following injected specs. + +**Tools**: `Read, Write, Edit, Bash, Glob, Grep` + +**Workflow**: + +1. Understand specs (from injected context) +2. Understand requirements (prd.md, info.md) +3. Implement features +4. Self-check (run lint/typecheck) + +**Forbidden**: + +- `git commit` +- `git push` +- `git merge` + +**Context Injection**: Hook injects `implement.jsonl` + `prd.md` + `info.md` + +--- + +## Check Agent + +**File**: `.claude/agents/check.md` + +**Purpose**: Review code and **self-fix** issues. 
+ +**Tools**: `Read, Write, Edit, Bash, Glob, Grep` + +**Key Principle**: Fix issues yourself, don't just report them. + +**Workflow**: + +1. Get changes: `git diff` +2. Check against specs +3. Self-fix issues directly +4. Run verification (lint, typecheck) +5. Output completion markers + +**Controlled by**: Ralph Loop (SubagentStop hook) + +**Completion Markers**: + +``` +TYPECHECK_FINISH +LINT_FINISH +CODEREVIEW_FINISH +``` + +--- + +## Debug Agent + +**File**: `.claude/agents/debug.md` + +**Purpose**: Fix specific reported issues. + +**Tools**: `Read, Write, Edit, Bash, Glob, Grep` + +**Workflow**: + +1. Parse issues (prioritize P1 > P2 > P3) +2. Research if needed +3. Fix one by one +4. Verify each fix (run typecheck) + +**Forbidden**: + +- Refactor surrounding code +- Add new features +- Modify unrelated files +- Use non-null assertion (`x!`) +- Git commit + +--- + +## Invoking Agents + +Use the `Task` tool with `subagent_type`: + +```javascript +Task({ + subagent_type: "implement", + prompt: "Implement the login feature", + model: "opus", + run_in_background: true // optional +}) +``` + +### Agent Resolution + +1. Claude Code looks for `.claude/agents/{subagent_type}.md` +2. Loads agent definition (tools, model, instructions) +3. **PreToolUse hook fires** → `inject-subagent-context.py` +4. Hook injects context from JSONL files +5. 
Agent runs with full context + +--- + +## Context Injection + +### How It Works + +``` +Task(subagent_type="implement") called + │ + ▼ + PreToolUse hook fires + │ + ▼ +inject-subagent-context.py runs + │ + ├── Read .trellis/.current-task + │ + ├── Find task directory + │ + ├── Load implement.jsonl + │ {"file": ".trellis/spec/backend/index.md", "reason": "..."} + │ {"file": "src/services/auth.ts", "reason": "..."} + │ + ├── Read each file content + │ + └── Build new prompt: + # Implement Agent Task + ## Your Context + === .trellis/spec/backend/index.md === + [content] + === src/services/auth.ts === + [content] + ## Your Task + [original prompt] +``` + +### JSONL Files + +| File | Agent | Purpose | +| ----------------- | --------- | ----------------------------- | +| `implement.jsonl` | implement | Dev specs, patterns to follow | +| `check.jsonl` | check | Check specs, quality criteria | +| `debug.jsonl` | debug | Debug context, error reports | +| `research.jsonl` | research | (optional) Research scope | + +--- + +## Multi-Agent Workflow + +In the **current directory** (no worktree): + +``` +User request + │ + ▼ +Orchestrator (you or dispatch) + │ + ├── Task(subagent_type="research") + │ └── Returns: code patterns, relevant files + │ + ├── Task(subagent_type="implement") + │ └── Returns: implemented code + │ + ├── Task(subagent_type="check") + │ └── Returns: reviewed & fixed code + │ + └── Human commits +``` + +### Task Workflow (from /trellis:start) + +``` +1. User describes task +2. AI classifies (Question / Trivial / Development Task) +3. For Development Task: + a. Research Agent → analyze codebase + b. Create task directory + JSONL files + c. task.py start → set .current-task + d. Implement Agent → write code + e. Check Agent → review & fix + f. Human tests and commits +``` + +--- + +## Adding Custom Agents + +### 1. Create Definition + +`.claude/agents/my-agent.md`: + +```markdown +--- +name: my-agent +description: | + What this agent specializes in. 
+tools: Read, Write, Edit, Bash, Glob, Grep +model: opus +--- + +# My Agent + +## Core Responsibilities + +1. ... + +## Workflow + +1. ... + +## Forbidden Operations + +- ... +``` + +### 2. Update Hook + +Edit `.claude/hooks/inject-subagent-context.py`: + +```python +# Add constant +AGENT_MY_AGENT = "my-agent" + +# Add to list +AGENTS_ALL = (..., AGENT_MY_AGENT) + +# Add context function +def get_my_agent_context(repo_root, task_dir): + # Load my-agent.jsonl or fallback + ... + +# Add to main switch +elif subagent_type == AGENT_MY_AGENT: + context = get_my_agent_context(repo_root, task_dir) + new_prompt = build_my_agent_prompt(original_prompt, context) +``` + +### 3. Create JSONL + +In task directories, create `my-agent.jsonl` (one JSON object per line): + +```jsonl +{"file": ".trellis/spec/my-spec.md", "reason": "My agent spec"} +``` + +### 4. (Optional) Add to Dispatch + +Update `task.json` default phases: + +```json +"next_action": [ + {"phase": 1, "action": "my-agent"}, + ... +] +``` + +--- + +## vs Multi-Session + +| Aspect | Multi-Agent | Multi-Session | +| ------------- | --------------------------- | -------------------------- | +| **What** | Multiple agents in sequence | Parallel isolated sessions | +| **Where** | Current directory | Separate worktrees | +| **Isolation** | Shared filesystem | Separate filesystems | +| **Use case** | Normal development | Parallel tasks | +| **Worktree** | Not needed | Required | + +Multi-Agent is the **agent system** - dispatch calling implement, check, etc. + +Multi-Session is **parallel execution** - multiple worktrees running simultaneously. + +They can combine: Multi-Session runs Multi-Agent workflows in each worktree. 
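For a concrete feel of what step 2 of "Adding Custom Agents" amounts to, here is a self-contained sketch of a JSONL-driven context loader. It is illustrative only — the shipped `inject-subagent-context.py` remains the source of truth, and the section layout simply mirrors the prompt format shown under Context Injection:

```python
import json
from pathlib import Path

def load_agent_context(repo_root: str, subagent_type: str) -> str:
    """Render the files listed in {subagent_type}.jsonl as prompt sections."""
    root = Path(repo_root)
    # .current-task stores the task directory path relative to the repo root.
    task_rel = (root / ".trellis" / ".current-task").read_text().strip()
    jsonl = root / task_rel / f"{subagent_type}.jsonl"
    sections = []
    for line in jsonl.read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)  # {"file": "...", "reason": "..."}
        target = root / entry["file"]
        body = target.read_text() if target.exists() else "(file not found)"
        sections.append(f"=== {entry['file']} ===\n{body}")
    return "\n\n".join(sections)
```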
diff --git a/Skills/new-skills/trellis-meta/references/claude-code/hooks.md b/Skills/new-skills/trellis-meta/references/claude-code/hooks.md new file mode 100644 index 0000000..209df4a --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/claude-code/hooks.md @@ -0,0 +1,245 @@ +# Hooks System + +Claude Code hooks for automatic context injection and quality enforcement. + +--- + +## Overview + +Hooks intercept Claude Code lifecycle events to inject context and enforce quality. + +``` +┌─────────────────────────────────────────────────────────────────────────┐ +│ HOOK LIFECYCLE │ +│ │ +│ Session Start ──► SessionStart hook ──► Inject workflow context │ +│ │ +│ Task() called ──► PreToolUse:Task hook ──► Inject specs from JSONL │ +│ │ +│ Agent stops ──► SubagentStop hook ──► Ralph Loop verification │ +│ │ +└─────────────────────────────────────────────────────────────────────────┘ +``` + +--- + +## Configuration + +### `.claude/settings.json` + +```json +{ + "hooks": { + "SessionStart": [ + { + "matcher": "startup", + "hooks": [ + { + "type": "command", + "command": "python3 \"$CLAUDE_PROJECT_DIR/.claude/hooks/session-start.py\"", + "timeout": 10 + } + ] + } + ], + "PreToolUse": [ + { + "matcher": "Task", + "hooks": [ + { + "type": "command", + "command": "python3 \"$CLAUDE_PROJECT_DIR/.claude/hooks/inject-subagent-context.py\"", + "timeout": 30 + } + ] + } + ], + "SubagentStop": [ + { + "matcher": "check", + "hooks": [ + { + "type": "command", + "command": "python3 \"$CLAUDE_PROJECT_DIR/.claude/hooks/ralph-loop.py\"", + "timeout": 300 + } + ] + } + ] + } +} +``` + +--- + +## SessionStart Hook + +### Purpose + +Inject initial context when a Claude Code session starts. 
+ +### Script: `session-start.py` + +**Injects:** + +- Developer identity from `.trellis/.developer` +- Git status and recent commits +- Current task (if `.trellis/.current-task` exists) +- `workflow.md` content +- All `spec/*/index.md` files +- Start instructions + +**Output format:** + +```json +{ + "result": "continue", + "message": "# Session Context\n\n## Developer\ntaosu\n\n## Git Status\n..." +} +``` + +--- + +## PreToolUse:Task Hook + +### Purpose + +Inject relevant specs when a subagent is invoked. + +### Script: `inject-subagent-context.py` + +**Trigger:** When `Task(subagent_type="...")` is called. + +**Flow:** + +1. Read `subagent_type` from tool input +2. Find current task from `.trellis/.current-task` +3. Load `{subagent_type}.jsonl` from task directory +4. Read each file listed in JSONL +5. Build augmented prompt with context +6. Update `task.json` with current phase + +**Output format:** + +```json +{ + "result": "continue", + "updatedInput": { + "prompt": "# Implement Agent Task\n\n## Context\n...\n\n## Your Task\n..." + } +} +``` + +### JSONL Format + +```jsonl +{"file": ".trellis/spec/backend/index.md", "reason": "Backend guidelines"} +{"file": "src/services/auth.ts", "reason": "Existing pattern"} +{"file": ".trellis/tasks/01-31-add-login/prd.md", "reason": "Requirements"} +``` + +--- + +## SubagentStop Hook + +### Purpose + +Quality enforcement via Ralph Loop. + +### Script: `ralph-loop.py` + +**Trigger:** When Check Agent tries to stop. + +**Flow:** + +1. Read verify commands from `worktree.yaml` +2. Execute each command (pnpm lint, pnpm typecheck, etc.) +3. If all pass → allow stop +4. If any fail → block stop, agent continues + +→ See [ralph-loop.md](./ralph-loop.md) for details. 
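In outline, that gate is small. A hypothetical stand-in for `ralph-loop.py` — it runs each verify command and emits the documented continue/block response shape; the real script reads its commands from `worktree.yaml`:

```python
import subprocess

def ralph_loop(verify_commands: list[str]) -> dict:
    """Run each verify command; block the check agent's stop if any fails."""
    failures = []
    for cmd in verify_commands:
        # In the real hook these come from worktree.yaml (e.g. "pnpm lint").
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if proc.returncode != 0:
            failures.append(f"{cmd} exited {proc.returncode}")
    if failures:
        # Matches the documented "block" hook response format.
        return {"result": "block",
                "message": "Verification failed:\n" + "\n".join(failures)}
    return {"result": "continue"}
```

Wiring this into `SubagentStop` then only requires emitting the returned dict as JSON, per the hook response format documented below.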
+ +--- + +## Hook Scripts Location + +``` +.claude/hooks/ +├── session-start.py # SessionStart handler +├── inject-subagent-context.py # PreToolUse:Task handler +└── ralph-loop.py # SubagentStop:check handler +``` + +--- + +## Environment Variables + +Available in hook scripts: + +| Variable | Description | +| -------------------- | ------------------------------------------- | +| `CLAUDE_PROJECT_DIR` | Project root directory | +| `HOOK_EVENT` | Event type (SessionStart, PreToolUse, etc.) | +| `TOOL_NAME` | Tool being called (for PreToolUse) | +| `TOOL_INPUT` | JSON string of tool input | +| `SUBAGENT_TYPE` | Agent type (for SubagentStop) | + +--- + +## Hook Response Format + +### Continue (allow operation) + +```json +{ + "result": "continue", + "message": "Optional message to inject" +} +``` + +### Continue with modified input + +```json +{ + "result": "continue", + "updatedInput": { + "prompt": "Modified prompt..." + } +} +``` + +### Block (prevent operation) + +```json +{ + "result": "block", + "message": "Reason for blocking" +} +``` + +--- + +## Debugging Hooks + +### View hook output + +```bash +# Check if hooks are configured +cat .claude/settings.json | grep -A 20 '"hooks"' + +# Test session-start manually +python3 .claude/hooks/session-start.py + +# Test inject-context (needs TOOL_INPUT env var) +TOOL_INPUT='{"subagent_type":"implement","prompt":"test"}' \ + python3 .claude/hooks/inject-subagent-context.py +``` + +### Common Issues + +| Issue | Cause | Solution | +| ------------------- | --------------------- | ---------------------------- | +| Hook not running | Wrong matcher | Check settings.json matcher | +| Timeout | Script too slow | Increase timeout or optimize | +| No context injected | Missing .current-task | Run `task.py start` | +| JSONL not found | Wrong task directory | Check .current-task path | diff --git a/Skills/new-skills/trellis-meta/references/claude-code/multi-session.md 
b/Skills/new-skills/trellis-meta/references/claude-code/multi-session.md new file mode 100644 index 0000000..1631d4d --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/claude-code/multi-session.md @@ -0,0 +1,482 @@ +# Multi-Session Reference + +Documentation for parallel isolated sessions using Git worktrees. + +--- + +## Overview + +Multi-Session enables **parallel, isolated development sessions** using Git worktrees. Each session runs in its own directory with its own branch. + +**Key Distinction**: + +- **Multi-Agent** = Multiple agents in current directory (dispatch → implement → check) +- **Multi-Session** = Parallel sessions in separate worktrees (this document) + +--- + +## When to Use Multi-Session + +| Scenario | Use Multi-Session? | +| ----------------------------------------------- | -------------------- | +| Normal task in current branch | No - use Multi-Agent | +| Long-running task, want to work on other things | Yes | +| Multiple independent tasks in parallel | Yes | +| Task needs clean isolated environment | Yes | +| Quick fix or small change | No | + +--- + +## Architecture + +``` +┌────────────────────────────────────────────────────────────────────────────┐ +│ MAIN REPOSITORY │ +│ (your current directory) │ +│ │ +│ /trellis:parallel → Configure task → start.py │ +│ │ │ +│ │ Creates worktree │ +│ │ Starts agent │ +│ ▼ │ +└───────────────────────────────────────────┼─────────────────────────────────┘ + │ + ┌─────────────────────────────┼─────────────────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌──────────────────────┐ ┌──────────────────────┐ ┌──────────────────────┐ +│ WORKTREE 1 │ │ WORKTREE 2 │ │ WORKTREE 3 │ +│ feature/add-login │ │ feature/user-profile │ │ fix/api-bug │ +│ │ │ │ │ │ +│ ┌──────────────────┐ │ │ ┌──────────────────┐ │ │ ┌──────────────────┐ │ +│ │ Dispatch Agent │ │ │ │ Dispatch Agent │ │ │ │ Dispatch Agent │ │ +│ │ ↓ │ │ │ │ ↓ │ │ │ │ ↓ │ │ +│ │ Implement Agent │ │ │ │ Implement Agent │ │ │ │ Implement Agent │ │ +│ │ ↓ │ │ 
│ │ ↓ │ │ │ │ ↓ │ │ +│ │ Check Agent │ │ │ │ Check Agent │ │ │ │ Check Agent │ │ +│ │ ↓ │ │ │ │ ↓ │ │ │ │ ↓ │ │ +│ │ create_pr.py │ │ │ │ create_pr.py │ │ │ │ create_pr.py │ │ +│ └──────────────────┘ │ │ └──────────────────┘ │ │ └──────────────────┘ │ +│ │ │ │ │ │ +│ Session: abc123 │ │ Session: def456 │ │ Session: ghi789 │ +│ PID: 12345 │ │ PID: 12346 │ │ PID: 12347 │ +└──────────────────────┘ └──────────────────────┘ └──────────────────────┘ + +Location: ../worktrees/ (default) +``` + +--- + +## Git Worktree + +### What is a Worktree? + +Git worktrees allow multiple working directories from one repository: + +``` +/project/ # Main repo (main branch) +/project/../worktrees/ # Default: ../worktrees +├── feature/add-login/ # Worktree 1 (own branch) +├── feature/user-profile/ # Worktree 2 (own branch) +└── fix/api-bug/ # Worktree 3 (own branch) +``` + +### Benefits + +| Benefit | Description | +| ----------------------- | ----------------------------------- | +| **True isolation** | Separate filesystem per session | +| **Own branch** | Each worktree on its own branch | +| **Parallel execution** | Multiple agents work simultaneously | +| **Clean state** | Start fresh, no interference | +| **Session persistence** | Each has `.session-id` for resume | +| **Easy cleanup** | Remove worktree = remove everything | + +--- + +## Configuration + +### worktree.yaml + +Location: `.trellis/worktree.yaml` + +```yaml +# Where worktrees are created (relative to project) +# Default: ../worktrees +worktree_dir: ../worktrees + +# Files to copy to each worktree (default: []) +copy: + - .trellis/.developer # Developer identity + - .env # Environment variables + - .env.local # Local overrides + +# Commands after worktree creation (default: []) +post_create: + - npm install # Install dependencies + # - pnpm install --frozen-lockfile + +# Verification commands for Ralph Loop (default: []) +verify: + - pnpm lint + - pnpm typecheck +``` + +### Task Configuration + +Each session needs a 
configured task:

```json
// task.json
{
  "branch": "feature/add-login", // Required for worktree
  "base_branch": "main",
  "worktree_path": null, // Set by start.py
  "current_phase": 0,
  "next_action": [
    { "phase": 1, "action": "implement" },
    { "phase": 2, "action": "check" },
    { "phase": 3, "action": "finish" },
    { "phase": 4, "action": "create-pr" }
  ]
}
```

---

## Scripts

### start.py - Start Session

Creates the worktree and starts the agent.

```bash
python3 .trellis/scripts/multi_agent/start.py <task-dir>
```

**Actions**:

1. Read `task.json` for the branch name
2. Create the git worktree:
   ```bash
   git worktree add -b <branch> ../trellis-worktrees/<branch>
   ```
3. Copy files from the `worktree.yaml` `copy` list
4. Copy the task directory to the worktree
5. Run `post_create` hooks
6. Set `.trellis/.current-task` in the worktree
7. Start the Claude Dispatch Agent:
   ```bash
   claude -p --agent dispatch \
     --session-id <session-id> \
     --dangerously-skip-permissions \
     --output-format stream-json \
     --verbose "Start the pipeline"
   ```
8. Register the session in `registry.json`

**Example**:

```bash
python3 .trellis/scripts/multi_agent/start.py .trellis/tasks/01-31-add-login-taosu
# Output: Started agent in ../trellis-worktrees/feature/add-login
```

---

### status.py - Monitor Sessions

Check all running sessions.

```bash
# Overview
python3 .trellis/scripts/multi_agent/status.py

# Detailed view
python3 .trellis/scripts/multi_agent/status.py --detail

# Watch mode
python3 .trellis/scripts/multi_agent/status.py --watch

# View logs
python3 .trellis/scripts/multi_agent/status.py --log

# Show registry
python3 .trellis/scripts/multi_agent/status.py --registry
```

**Output**:

```
Active Sessions:
┌──────────────┬──────────┬────────────────┬──────────┬───────────┐
│ Task         │ Status   │ Phase          │ Elapsed  │ Files     │
├──────────────┼──────────┼────────────────┼──────────┼───────────┤
│ add-login    │ Running  │ 2/4 (check)    │ 15m 32s  │ 5 changed │
│ fix-api      │ Stopped  │ 1/4 (implement)│ 8m 15s   │ 2 changed │
└──────────────┴──────────┴────────────────┴──────────┴───────────┘

Resume stopped sessions:
  cd ../trellis-worktrees/feature/fix-api && claude --resume
```

---

### create_pr.py - Create PR

Creates a PR from the worktree changes.

```bash
python3 .trellis/scripts/multi_agent/create_pr.py [--dry-run]
```

**Actions**:

1. Stage changes: `git add -A`
2. Exclude the workspace: `git reset .trellis/workspace/`
3. Commit: `feat(<scope>): <description>`
4. Push to the remote
5. Create a Draft PR: `gh pr create --draft`
6. Update task.json: `status: "completed"`, `pr_url`

---

### cleanup.py - Remove Worktrees

Clean up after completion.

```bash
# Specific worktree
python3 .trellis/scripts/multi_agent/cleanup.py <worktree>

# All merged worktrees
python3 .trellis/scripts/multi_agent/cleanup.py --merged

# All worktrees (with confirmation)
python3 .trellis/scripts/multi_agent/cleanup.py --all
```

**Actions**:

1. Archive the task to `.trellis/tasks/archive/YYYY-MM/`
2. Remove from the registry
3. Remove the worktree: `git worktree remove <path>`
4. Optionally delete the branch

---

### plan.py - Auto-Configure Task

Launches the Plan Agent to create the task configuration.

```bash
python3 .trellis/scripts/multi_agent/plan.py \
  --name <slug> \
  --type <frontend|backend|fullstack> \
  --requirement "<description>"
```

**Plan Agent**:

1. Evaluates the requirements (can REJECT)
2. Calls the Research Agent
3. Creates `prd.md`
4. Configures `task.json`
5. Initializes the JSONL files

---

## Session Registry

Tracks all running sessions.

**Location**: `.trellis/workspace/<developer>/.agents/registry.json`

```json
{
  "agents": [
    {
      "id": "feature-add-login",
      "worktree_path": "/abs/path/to/trellis-worktrees/feature/add-login",
      "pid": 12345,
      "started_at": "2026-01-31T10:30:00",
      "task_dir": ".trellis/tasks/01-31-add-login-taosu"
    }
  ]
}
```

**API** (`common/registry.py`):

```python
registry_add_agent(agent_id, worktree_path, pid, task_dir)
registry_remove_by_id(agent_id)
registry_remove_by_worktree(worktree_path)
registry_search_agent(pattern)
registry_list_agents()
```

---

## Complete Workflow

### 1. Configure Task

```bash
# Create task
python3 .trellis/scripts/task.py create "Add login" --slug add-login

# Configure
python3 .trellis/scripts/task.py init-context fullstack
python3 .trellis/scripts/task.py set-branch feature/add-login

# Write prd.md
# ...
```

### 2. Start Session

```bash
python3 .trellis/scripts/multi_agent/start.py <task-dir>
```

### 3. Monitor

```bash
python3 .trellis/scripts/multi_agent/status.py --watch add-login
```

### 4.
After Completion + +```bash +# PR auto-created +# Review on GitHub, merge + +# Cleanup +python3 .trellis/scripts/multi_agent/cleanup.py feature/add-login +``` + +--- + +## Parallel Execution + +Start multiple sessions: + +```bash +# Session 1 +python3 .trellis/scripts/multi_agent/start.py .trellis/tasks/01-31-add-login + +# Session 2 (immediately) +python3 .trellis/scripts/multi_agent/start.py .trellis/tasks/01-31-fix-api + +# Session 3 +python3 .trellis/scripts/multi_agent/start.py .trellis/tasks/01-31-update-docs + +# Monitor all +python3 .trellis/scripts/multi_agent/status.py +``` + +Each runs independently: + +- Own worktree +- Own branch +- Own Claude process +- Own registry entry + +--- + +## Resuming Sessions + +If a session stops: + +```bash +# Find session info +python3 .trellis/scripts/multi_agent/status.py --detail + +# Resume +cd ../trellis-worktrees/feature/task-name +claude --resume +``` + +--- + +## Ralph Loop + +Quality enforcement for Check Agent in sessions. + +**Mechanism**: + +1. Check Agent completes +2. SubagentStop hook fires +3. `ralph-loop.py` runs verify commands +4. All pass → allow stop +5. Any fail → block, continue agent + +**Constants**: + +| Constant | Value | Description | +| ----------------------- | ----- | ----------------------- | +| `MAX_ITERATIONS` | 5 | Maximum loop iterations | +| `STATE_TIMEOUT_MINUTES` | 30 | State timeout | +| Command timeout | 120s | Per verify command | + +**Configuration** (`worktree.yaml`): + +```yaml +verify: + - pnpm lint + - pnpm typecheck +``` + +**State** (`.trellis/.ralph-state.json`): + +```json +{ + "task": ".trellis/tasks/01-31-add-login", + "iteration": 2, + "started_at": "2026-01-31T10:30:00" +} +``` + +**Limits**: Max 5 iterations (`MAX_ITERATIONS`), 30min timeout (`STATE_TIMEOUT_MINUTES`), 120s per command + +--- + +## Troubleshooting + +### Session Not Starting + +1. Check `worktree.yaml` exists +2. Verify branch name doesn't exist +3. Check `post_create` hooks +4. 
Look at the start.py output

### Session Stuck

1. Check the Ralph Loop iteration (max 5)
2. Check the `verify` commands in `worktree.yaml`
3. Manually run the verify commands
4. Check `.trellis/.ralph-state.json`

### Worktree Issues

```bash
# Force remove
git worktree remove <path> --force

# Prune stale
git worktree prune

# List all
git worktree list
```

### Registry Out of Sync

```bash
# View
python3 .trellis/scripts/multi_agent/status.py --registry

# Manual edit
vim .trellis/workspace/<developer>/.agents/registry.json
```
diff --git a/Skills/new-skills/trellis-meta/references/claude-code/overview.md b/Skills/new-skills/trellis-meta/references/claude-code/overview.md new file mode 100644 index 0000000..8634c8f --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/claude-code/overview.md @@ -0,0 +1,129 @@
# Claude Code Features Overview

These features require **Claude Code** (or **iFlow**, which shares the same hook system) and don't work on other platforms.

---

## Why Claude Code Only?

Claude Code provides unique capabilities:

| Feature        | Claude Code | Why Required                     |
| -------------- | ----------- | -------------------------------- |
| Hooks          | ✅          | Hook system for lifecycle events |
| Task tool      | ✅          | Subagent invocation with context |
| `--agent` flag | ✅          | Load agent definitions           |
| `--resume`     | ✅          | Session persistence              |
| CLI scripting  | ✅          | Automation with `claude` command |

---

## Feature Categories

### Hooks System

Automatic context injection and quality enforcement.

| Hook                 | When              | Purpose                 |
| -------------------- | ----------------- | ----------------------- |
| `SessionStart`       | Session begins    | Inject workflow context |
| `PreToolUse:Task`    | Before subagent   | Inject specs via JSONL  |
| `SubagentStop:check` | Check agent stops | Ralph Loop enforcement  |

→ See [hooks.md](./hooks.md)

### Agent System

Specialized agents for different development phases.
+ +| Agent | Purpose | +| ----------- | --------------------- | +| `dispatch` | Orchestrate pipeline | +| `implement` | Write code | +| `check` | Review and self-fix | +| `debug` | Fix issues | +| `research` | Find patterns | +| `plan` | Evaluate requirements | + +→ See [agents.md](./agents.md) + +### Ralph Loop + +Quality enforcement for Check Agent. + +- Runs verify commands when Check Agent stops +- Blocks completion until all pass +- Max 5 iterations, 30min timeout + +→ See [ralph-loop.md](./ralph-loop.md) + +### Multi-Session + +Parallel isolated sessions using Git worktrees. + +- Each session in separate worktree +- Own branch, own Claude process +- Automated PR creation + +→ See [multi-session.md](./multi-session.md) + +### worktree.yaml + +Configuration for Multi-Session and Ralph Loop. + +→ See [worktree-config.md](./worktree-config.md) + +--- + +## Documents in This Directory + +| Document | Content | +| -------------------- | -------------------------------- | +| `hooks.md` | Hook system, context injection | +| `agents.md` | Agent types, invocation, context | +| `ralph-loop.md` | Quality enforcement mechanism | +| `multi-session.md` | Parallel worktree sessions | +| `worktree-config.md` | worktree.yaml configuration | +| `scripts.md` | Claude Code only scripts | + +--- + +## Architecture + +``` +┌─────────────────────────────────────────────────────────────────────────┐ +│ CLAUDE CODE INTEGRATION │ +│ │ +│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │ +│ │ SessionStart │ │ PreToolUse │ │ SubagentStop │ │ +│ │ Hook │ │ Hook │ │ Hook │ │ +│ └───────┬────────┘ └───────┬────────┘ └───────┬────────┘ │ +│ │ │ │ │ +│ ▼ ▼ ▼ │ +│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │ +│ │ session-start │ │ inject-context │ │ ralph-loop │ │ +│ │ .py │ │ .py │ │ .py │ │ +│ └───────┬────────┘ └───────┬────────┘ └───────┬────────┘ │ +│ │ │ │ │ +│ ▼ ▼ ▼ │ +│ ┌─────────────────────────────────────────────────────────────────┐ │ +│ │ CORE SYSTEMS 
(File-Based) │ │ +│ │ Workspace │ Tasks │ Specs │ Commands │ Scripts │ │ +│ └─────────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────┘ +``` + +--- + +## Checking Claude Code Availability + +```bash +# Check if Claude Code is installed +claude --version + +# Verify hooks are configured +cat .claude/settings.json | grep -A 5 '"hooks"' +``` + +If hooks aren't present, Claude Code features won't work. diff --git a/Skills/new-skills/trellis-meta/references/claude-code/ralph-loop.md b/Skills/new-skills/trellis-meta/references/claude-code/ralph-loop.md new file mode 100644 index 0000000..9f28314 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/claude-code/ralph-loop.md @@ -0,0 +1,263 @@ +# Ralph Loop + +Quality enforcement mechanism for Check Agent. + +--- + +## Overview + +Ralph Loop prevents Check Agent from stopping until all verification commands pass. + +``` +┌─────────────────────────────────────────────────────────────────────────┐ +│ RALPH LOOP │ +│ │ +│ Check Agent completes │ +│ │ │ +│ ▼ │ +│ SubagentStop hook fires ──► ralph-loop.py runs │ +│ │ │ +│ ▼ │ +│ ┌─────────────────────────────────────────────────────────────────┐ │ +│ │ Run verify commands from worktree.yaml: │ │ +│ │ │ │ +│ │ pnpm lint → exit 0 ✓ │ │ +│ │ pnpm typecheck → exit 0 ✓ │ │ +│ │ pnpm test → exit 1 ✗ │ │ +│ │ │ │ +│ │ Result: FAIL (test failed) │ │ +│ └─────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌─────────────────┐ ┌─────────────────┐ │ +│ │ All pass? 
│──── YES ────►│ Allow stop │ │ +│ └────────┬────────┘ └─────────────────┘ │ +│ │ NO │ +│ ▼ │ +│ ┌─────────────────┐ │ +│ │ Block stop │ ◄─── Agent continues to fix issues │ +│ │ Inject errors │ │ +│ └─────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────┘ +``` + +--- + +## Configuration + +### `worktree.yaml` + +```yaml +verify: + - pnpm lint + - pnpm typecheck + # - pnpm test + # - pnpm build +``` + +--- + +## Constants + +| Constant | Value | Description | +| ----------------------- | ----- | --------------------- | +| `MAX_ITERATIONS` | 5 | Maximum loop attempts | +| `STATE_TIMEOUT_MINUTES` | 30 | State file timeout | +| `COMMAND_TIMEOUT` | 120s | Per-command timeout | + +--- + +## State File + +### `.trellis/.ralph-state.json` + +Tracks loop state across iterations. + +```json +{ + "task": ".trellis/tasks/01-31-add-login", + "iteration": 2, + "started_at": "2026-01-31T10:30:00" +} +``` + +--- + +## Flow + +### Iteration 1 + +1. Check Agent completes work +2. SubagentStop hook fires +3. `ralph-loop.py` creates state file (iteration=1) +4. Runs verify commands +5. If fail: block stop, inject error messages +6. Check Agent continues fixing + +### Iteration 2-5 + +1. Check Agent tries to stop again +2. Hook reads state file, increments iteration +3. Runs verify commands again +4. Repeat until pass or max iterations + +### Max Iterations Reached + +1. Iteration 5 still fails +2. Hook allows stop (prevents infinite loop) +3. Logs warning about unresolved issues + +### Timeout + +1. State file older than 30 minutes +2. Hook resets state (fresh start) +3. Treats as iteration 1 + +--- + +## Verify Commands + +### Execution Order + +Commands run in config order. First failure stops execution. 

```yaml
verify:
  - pnpm lint      # Runs first (fast)
  - pnpm typecheck # Runs second
  - pnpm test      # Runs third (slow)
```

**Recommendation**: Order fast → slow

### Exit Codes

- Exit 0 = Pass
- Non-zero = Fail

### Timeout

Each command has a 120-second timeout. Long-running tests may need:

- Splitting into smaller test suites
- Running only fast tests in Ralph Loop
- Adjusting `COMMAND_TIMEOUT` in the script

---

## Fallback: Completion Markers

If `worktree.yaml` has no `verify` config, Ralph Loop uses completion markers.

### How It Works

1. Read `check.jsonl` for the reason fields
2. Generate the expected markers: `{REASON}_FINISH`
3. Check the agent output for all markers
4. Missing marker = block stop

### Example

```jsonl
{"file": "...", "reason": "typecheck"}
{"file": "...", "reason": "lint"}
```

Expected markers:

- `TYPECHECK_FINISH`
- `LINT_FINISH`

---

## Debugging

### Check State

```bash
cat .trellis/.ralph-state.json
```

### Manual Verify

```bash
# Run verify commands manually
pnpm lint && pnpm typecheck && pnpm test
```

### Reset State

```bash
rm .trellis/.ralph-state.json
```

### View Hook Output

Check the agent output for Ralph Loop messages:

- "Verification passed" = all commands succeeded
- "Verification failed" = blocking, shows errors
- "Max iterations reached" = giving up

---

## Customizing

### Add Test Verification

```yaml
verify:
  - pnpm lint
  - pnpm typecheck
  - pnpm test
```

### Add Build Verification

```yaml
verify:
  - pnpm lint
  - pnpm typecheck
  - pnpm build
```

### Different Languages

**Go:**

```yaml
verify:
  - go fmt ./...
  - go vet ./...
  - go test ./...
```

**Python:**

```yaml
verify:
  - ruff check .
  - mypy .
  - pytest
```

**Rust:**

```yaml
verify:
  - cargo fmt --check
  - cargo clippy
  - cargo test
```

---

## Disabling Ralph Loop

To disable it for a project:

1.
Remove `verify` from `worktree.yaml`
2. Or remove the SubagentStop hook from settings.json

**Warning**: Without Ralph Loop, code quality isn't automatically enforced.
diff --git a/Skills/new-skills/trellis-meta/references/claude-code/scripts.md b/Skills/new-skills/trellis-meta/references/claude-code/scripts.md new file mode 100644 index 0000000..658dd7f --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/claude-code/scripts.md @@ -0,0 +1,258 @@
# Claude Code Scripts

Scripts that require the Claude Code CLI and hook system.

---

## Overview

These scripts require:

- The `claude` CLI command
- The hook system for context injection
- `--resume` for session persistence

```
.trellis/scripts/
├── common/
│   ├── worktree.py    # Worktree utilities
│   └── registry.py    # Agent registry
│
└── multi_agent/       # Multi-Session scripts
    ├── plan.py        # Launch Plan agent
    ├── start.py       # Start worktree agent
    ├── status.py      # Monitor agents
    ├── create_pr.py   # Create pull request
    └── cleanup.py     # Cleanup worktree
```

---

## Multi-Session Scripts

### `multi_agent/plan.py`

Launch the Plan Agent to create the task configuration.

```bash
python3 .trellis/scripts/multi_agent/plan.py \
  --name <slug> \
  --type <type> \
  --requirement "<description>"
```

**Options:**

- `--name` - Task slug
- `--type` - `frontend`, `backend`, `fullstack`
- `--requirement` - Task description

**Actions:**

1. Creates the task directory
2. Launches the Plan Agent via `claude`
3. The Plan Agent can REJECT unclear requirements
4. Creates `prd.md`, `task.json`, JSONL files

---

### `multi_agent/start.py`

Start an agent in a new worktree.

```bash
python3 .trellis/scripts/multi_agent/start.py <task-dir>
```

**Actions:**

1. Read `task.json` for the branch name
2. Create the git worktree:
   ```bash
   git worktree add -b <branch> ../worktrees/<branch>
   ```
3. Copy files from the `worktree.yaml` copy list
4. Copy the task directory to the worktree
5. Run `post_create` commands
6. Set `.trellis/.current-task`
7.
Start the Claude Dispatch Agent:
   ```bash
   claude -p --agent dispatch \
     --session-id <session-id> \
     --dangerously-skip-permissions \
     --output-format stream-json \
     "Start the pipeline"
   ```
8. Register the session in `registry.json`

---

### `multi_agent/status.py`

Monitor running sessions.

```bash
# Overview of all sessions
python3 .trellis/scripts/multi_agent/status.py

# Detailed view
python3 .trellis/scripts/multi_agent/status.py --detail

# Watch mode (auto-refresh)
python3 .trellis/scripts/multi_agent/status.py --watch

# View logs
python3 .trellis/scripts/multi_agent/status.py --log

# Show registry
python3 .trellis/scripts/multi_agent/status.py --registry
```

**Output:**

```
Active Sessions:
┌──────────────┬──────────┬────────────────┬──────────┬───────────┐
│ Task         │ Status   │ Phase          │ Elapsed  │ Files     │
├──────────────┼──────────┼────────────────┼──────────┼───────────┤
│ add-login    │ Running  │ 2/4 (check)    │ 15m 32s  │ 5 changed │
│ fix-api      │ Stopped  │ 1/4 (implement)│ 8m 15s   │ 2 changed │
└──────────────┴──────────┴────────────────┴──────────┴───────────┘
```

---

### `multi_agent/create_pr.py`

Create a pull request from the worktree changes.

```bash
python3 .trellis/scripts/multi_agent/create_pr.py [--dry-run]
```

**Actions:**

1. Stage changes: `git add -A`
2. Exclude the workspace: `git reset .trellis/workspace/`
3. Commit with the conventional format
4. Push to the remote
5. Create a Draft PR via `gh pr create --draft`
6. Update task.json with `pr_url`

---

### `multi_agent/cleanup.py`

Clean up completed worktrees.

```bash
# Specific worktree
python3 .trellis/scripts/multi_agent/cleanup.py <worktree>

# All merged worktrees
python3 .trellis/scripts/multi_agent/cleanup.py --merged

# All worktrees (with confirmation)
python3 .trellis/scripts/multi_agent/cleanup.py --all
```

**Actions:**

1. Archive the task to `.trellis/tasks/archive/YYYY-MM/`
2. Remove from the registry
3. Remove the worktree: `git worktree remove <path>`
4.
Optionally delete the branch

---

## Common Utilities

### `common/worktree.py`

Worktree management utilities.

```python
from common.worktree import (
    read_worktree_config,  # Read worktree.yaml
    get_worktree_path,     # Get path for branch
    create_worktree,       # Create new worktree
    remove_worktree,       # Remove worktree
)
```

### `common/registry.py`

Agent registry for tracking running sessions.

```python
from common.registry import (
    registry_add_agent,           # Add agent to registry
    registry_remove_by_id,        # Remove by agent ID
    registry_remove_by_worktree,  # Remove by path
    registry_search_agent,        # Search by pattern
    registry_list_agents,         # List all agents
)
```

**Registry file:** `.trellis/workspace/<developer>/.agents/registry.json`

```json
{
  "agents": [
    {
      "id": "feature-add-login",
      "worktree_path": "/abs/path/to/worktrees/feature/add-login",
      "pid": 12345,
      "started_at": "2026-01-31T10:30:00",
      "task_dir": ".trellis/tasks/01-31-add-login-taosu"
    }
  ]
}
```

---

## Claude CLI Usage

### Agent Mode

```bash
claude --agent dispatch "Start the pipeline"
```

### Print Mode (non-interactive)

```bash
claude -p "Do something"
```

### Session Resume

```bash
claude --resume
```

### Automation Mode

```bash
claude --dangerously-skip-permissions -p "..."
```

### JSON Output

```bash
claude --output-format stream-json -p "..."
+``` + +--- + +## Resuming Stopped Sessions + +```bash +# Find session info +python3 .trellis/scripts/multi_agent/status.py --detail + +# Resume in worktree +cd ../worktrees/feature/task-name +claude --resume +``` diff --git a/Skills/new-skills/trellis-meta/references/claude-code/worktree-config.md b/Skills/new-skills/trellis-meta/references/claude-code/worktree-config.md new file mode 100644 index 0000000..209468e --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/claude-code/worktree-config.md @@ -0,0 +1,471 @@ +# worktree.yaml Configuration Reference + +Complete guide to `.trellis/worktree.yaml` configuration. + +--- + +## Overview + +`worktree.yaml` configures **both** Multi-Session (worktree isolation) **and** some Multi-Agent behaviors (like Ralph Loop). + +```yaml +# .trellis/worktree.yaml + +# Multi-Session only +worktree_dir: ../worktrees # Default value +copy: + - .trellis/.developer + - .env +post_create: + - npm install + +# Both Multi-Session AND Multi-Agent +verify: + - pnpm lint + - pnpm typecheck +``` + +**Note**: Trellis uses a custom YAML parser (not PyYAML). Supports basic key-value pairs and arrays; complex nested structures may not work. + +--- + +## Configuration Sections + +### Which Config Affects What? + +| Config | Multi-Agent (current dir) | Multi-Session (worktree) | +| -------------- | ------------------------- | ----------------------------------- | +| `worktree_dir` | ❌ Not used | ✅ Worktree location | +| `copy` | ❌ Not used | ✅ Files copied to worktree | +| `post_create` | ❌ Not used | ✅ Commands after worktree creation | +| `verify` | ✅ Used by Ralph Loop | ✅ Used by Ralph Loop | + +**Key point**: `verify` config applies to BOTH modes! 
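The supported subset described in the custom-parser note above is small enough to sketch in a few lines. The parser below is an illustration only (`parse_simple_yaml` is a hypothetical name, not the actual Trellis implementation): it handles flat `key: value` pairs, `- item` arrays under a key, `#` comments, and quoted values, and nothing more:

```python
def parse_simple_yaml(text):
    """Parse the flat key/value + array subset of YAML described above.

    Hypothetical sketch; the real Trellis parser may differ in details.
    """
    config = {}
    current_key = None  # set while collecting array items under a key
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].rstrip()  # strip comments
        if not line.strip():
            continue
        stripped = line.strip()
        if stripped.startswith("- ") and current_key is not None:
            config[current_key].append(stripped[2:].strip().strip('"'))
        elif ":" in stripped and not line.startswith(" "):
            key, _, value = stripped.partition(":")
            value = value.strip().strip('"')
            if value:
                config[key.strip()] = value
                current_key = None
            else:
                config[key.strip()] = []  # array items follow
                current_key = key.strip()
    return config

cfg = parse_simple_yaml("""\
worktree_dir: ../worktrees  # Default value
copy:
  - .trellis/.developer
  - .env
verify:
  - pnpm lint
  - pnpm typecheck
""")
print(cfg)
```

This also shows why inline arrays and nested mappings fall outside the subset: the parser only ever tracks one current array key.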
+ +--- + +## Full Configuration + +```yaml +# ============================================================================= +# MULTI-SESSION ONLY - Only used in worktree mode +# ============================================================================= + +# Worktree creation location (relative to project root) +# Default: ../worktrees +worktree_dir: ../worktrees + +# Files to copy to each worktree +# These files are not in git, need manual copy +# Default: [] (empty array) +copy: + - .trellis/.developer # Developer identity + - .env # Environment variables + - .env.local # Local overrides + # - .npmrc # npm config + # - credentials.json # Credential files + +# Commands to run after worktree creation +# Runs in order, stops on first failure +# Default: [] (empty array) +post_create: + - npm install # or pnpm install + # - pnpm install --frozen-lockfile + # - cp .env.example .env + # - npm run db:migrate + +# ============================================================================= +# BOTH MODES - Used in both Multi-Agent and Multi-Session +# ============================================================================= + +# Verification commands - Used by Ralph Loop +# Runs when Check Agent stops +# All must pass to allow stop +# Default: [] (empty array) +verify: + - pnpm lint + - pnpm typecheck + # - pnpm test + # - pnpm build +``` + +### Default Values + +| Config | Default | Notes | +| -------------- | -------------- | ----------------------------------------------- | +| `worktree_dir` | `../worktrees` | Relative to project root | +| `copy` | `[]` | Empty array, no files copied | +| `post_create` | `[]` | Empty array, no commands run | +| `verify` | `[]` | Empty array, Ralph Loop uses completion markers | + +--- + +## Scenario: Multi-Agent in Current Directory + +**Requirement**: Run dispatch → implement → check in current directory, no worktree + +**worktree.yaml config**: + +```yaml +# These can be omitted (not used in current directory mode) +# 
worktree_dir: ... +# copy: ... +# post_create: ... + +# This is needed! Ralph Loop uses it +verify: + - pnpm lint + - pnpm typecheck +``` + +**Workflow**: + +1. Set `.trellis/.current-task` +2. Call `Task(subagent_type="implement")` +3. Call `Task(subagent_type="check")` +4. When Check Agent completes, Ralph Loop runs `verify` commands +5. Human commits + +--- + +## Scenario: Custom Workflows + +### Add test verification + +```yaml +verify: + - pnpm lint + - pnpm typecheck + - pnpm test # Add tests +``` + +### Add build verification + +```yaml +verify: + - pnpm lint + - pnpm typecheck + - pnpm build # Add build check +``` + +### Go projects + +```yaml +verify: + - go fmt ./... + - go vet ./... + - go test ./... +``` + +### Python projects + +```yaml +verify: + - ruff check . + - mypy . + - pytest +``` + +### Rust projects + +```yaml +verify: + - cargo fmt --check + - cargo clippy + - cargo test +``` + +--- + +## Scenario: Custom Worktree Creation + +### Different package managers + +```yaml +post_create: + # npm + - npm install + + # or pnpm + # - pnpm install --frozen-lockfile + + # or yarn + # - yarn install --frozen-lockfile + + # or bun + # - bun install +``` + +### Database migrations required + +```yaml +post_create: + - pnpm install + - pnpm db:migrate + - pnpm db:seed +``` + +### Code generation required + +```yaml +post_create: + - pnpm install + - pnpm codegen + - pnpm prisma generate +``` + +### Copy additional files + +```yaml +copy: + - .trellis/.developer + - .env + - .env.local + - .npmrc # npm private registry config + - firebase-credentials.json # Firebase credentials + - google-cloud-key.json # GCP credentials +``` + +--- + +## When worktree.yaml is Missing + +If `worktree.yaml` doesn't exist: + +| Feature | Behavior | +| ------------- | ------------------------------------------------ | +| Multi-Session | ❌ Cannot start (start.py requires config) | +| Multi-Agent | ⚠️ Works, but Ralph Loop uses completion markers | + +**Ralph Loop fallback 
behavior**: + +- Without `verify` config, uses completion markers +- Generates markers from `check.jsonl` reason field +- Example: `{"reason": "typecheck"}` → expects `TYPECHECK_FINISH` + +--- + +## Minimal Configuration + +### Multi-Agent only (current directory) + +```yaml +# .trellis/worktree.yaml +verify: + - pnpm lint + - pnpm typecheck +``` + +### Multi-Session only (worktree) + +```yaml +# .trellis/worktree.yaml +worktree_dir: ../worktrees +copy: + - .trellis/.developer +post_create: + - npm install +verify: + - pnpm lint + - pnpm typecheck +``` + +--- + +## Complete Examples + +### Node.js/TypeScript Project + +```yaml +worktree_dir: ../worktrees + +copy: + - .trellis/.developer + - .env + - .env.local + +post_create: + - pnpm install --frozen-lockfile + +verify: + - pnpm lint + - pnpm typecheck + - pnpm test +``` + +### Python Project + +```yaml +worktree_dir: ../worktrees + +copy: + - .trellis/.developer + - .env + - venv/ # or recreate venv + +post_create: + - python -m venv venv + - ./venv/bin/pip install -r requirements.txt + +verify: + - ./venv/bin/ruff check . + - ./venv/bin/mypy . + - ./venv/bin/pytest +``` + +### Go Project + +```yaml +worktree_dir: ../worktrees + +copy: + - .trellis/.developer + - .env + +post_create: + - go mod download + +verify: + - go fmt ./... + - go vet ./... + - golangci-lint run + - go test ./... 
+``` + +### Monorepo Project + +```yaml +worktree_dir: ../worktrees + +copy: + - .trellis/.developer + - .env + - .npmrc + +post_create: + - pnpm install --frozen-lockfile + - pnpm -r build # Build all packages + +verify: + - pnpm -r lint + - pnpm -r typecheck + - pnpm -r test +``` + +--- + +## Verification Command Notes + +### Ralph Loop Constants + +| Constant | Value | Description | +| ----------------------- | ----- | -------------------------- | +| `MAX_ITERATIONS` | 5 | Maximum loop iterations | +| `STATE_TIMEOUT_MINUTES` | 30 | State timeout (minutes) | +| Command timeout | 120s | Per verify command timeout | + +### Timeout + +Each verify command has a **120-second** (2-minute) timeout. For long-running tests you may need to: + +- Split the test suite +- Run only fast tests +- Modify the `COMMAND_TIMEOUT` constant in `ralph-loop.py` + +### Exit Codes + +- Exit code 0 = Pass +- Non-zero = Fail, blocks the Check Agent from stopping + +### Order + +Commands run in config order and stop on the first failure. + +Recommended order: fast → slow + +```yaml +verify: + - pnpm lint # Fast (seconds) + - pnpm typecheck # Medium (seconds-minutes) + - pnpm test # Slow (minutes) +``` + +--- + +## YAML Parser Notes + +Trellis uses a custom YAML parser (not PyYAML) with these limitations: + +### Supported Syntax + +```yaml +# Simple key-value +worktree_dir: ../worktrees + +# Arrays (2-space indent, starts with -) +copy: + - .trellis/.developer + - .env + +# Quoted values +worktree_dir: "../worktrees with spaces" +``` + +### Unsupported Syntax + +```yaml +# ❌ Inline arrays +copy: [.env, .npmrc] + +# ❌ Complex nesting +nested: + key: + subkey: value + +# ❌ Multi-line strings +description: | + Multiple + lines +``` + +--- + +## Debugging Configuration + +### View current config + +```bash +cat .trellis/worktree.yaml +``` + +### Test verify commands + +```bash +# Manual run +pnpm lint && pnpm typecheck + +# Or view Ralph Loop state +cat .trellis/.ralph-state.json +``` + +### View worktree status + +```bash +git 
worktree list +``` + +### Ralph Loop debugging + +```bash +# View state file +cat .trellis/.ralph-state.json + +# Example output +# { +# "task": ".trellis/tasks/01-31-add-login", +# "iteration": 2, +# "started_at": "2026-01-31T10:30:00" +# } + +# Ralph Loop auto-stops when exceeding MAX_ITERATIONS (5) or STATE_TIMEOUT_MINUTES (30) +``` diff --git a/Skills/new-skills/trellis-meta/references/core/files.md b/Skills/new-skills/trellis-meta/references/core/files.md new file mode 100644 index 0000000..3020d9d --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/core/files.md @@ -0,0 +1,382 @@ +# Trellis File Reference + +Complete reference of all files in the `.trellis/` directory. + +--- + +## Directory Structure + +``` +.trellis/ +├── .developer # Developer identity (gitignored) +├── .current-task # Active task pointer (gitignored) +├── .ralph-state.json # Ralph Loop state (gitignored) +├── .template-hashes.json # Template version tracking +├── .version # Installed Trellis version +├── .gitignore # Git ignore rules +├── workflow.md # Main workflow documentation +├── config.yaml # Project-level configuration +├── worktree.yaml # Multi-session configuration +│ +├── workspace/ # Developer workspaces +├── tasks/ # Task tracking +├── spec/ # Coding guidelines +└── scripts/ # Automation scripts +``` + +--- + +## Root Files + +### `.developer` + +**Purpose**: Store current developer identity. + +**Created by**: `init_developer.py` + +**Format**: Plain text, single line with developer name. + +``` +taosu +``` + +**Gitignored**: Yes - each machine has its own identity. + +--- + +### `.current-task` + +**Purpose**: Point to the active task directory. + +**Created by**: `task.py start <task-dir>` + +**Format**: Plain text, relative path to task directory. + +``` +.trellis/tasks/01-31-add-login-taosu +``` + +**Gitignored**: Yes - each developer works on different tasks. 
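For illustration, this is roughly how a hook or script consumes the pointer (a sketch only, not the actual hook implementation):

```python
from pathlib import Path
from typing import Optional

def get_current_task_dir() -> Optional[str]:
    """Sketch: return the active task directory, or None if no task is set."""
    pointer = Path(".trellis/.current-task")
    if not pointer.exists():
        return None
    return pointer.read_text().strip()
```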
+ +**Used by**: + +- Hooks read this to find task context +- Scripts use this for current task operations + +--- + +### `.ralph-state.json` + +**Purpose**: Track Ralph Loop iteration state. + +**Created by**: `ralph-loop.py` (Claude Code only) + +**Format**: JSON + +```json +{ + "task": ".trellis/tasks/01-31-add-login", + "iteration": 2, + "started_at": "2026-01-31T10:30:00" +} +``` + +**Gitignored**: Yes - runtime state. + +**Fields**: + +| Field | Type | Description | +| ------------ | -------- | ----------------------- | +| `task` | string | Task directory path | +| `iteration` | number | Current iteration (1-5) | +| `started_at` | ISO date | When loop started | + +--- + +### `.template-hashes.json` + +**Purpose**: Track template file versions for `trellis update`. + +**Created by**: `trellis init` or `trellis update` + +**Format**: JSON object mapping file paths to SHA-256 hashes. + +```json +{ + ".trellis/workflow.md": "028891d1fe839a266...", + ".claude/hooks/session-start.py": "0a9899e80f6bfe15...", + ".claude/commands/start.md": "d1276dcbff880299..." +} +``` + +**Used by**: + +- `trellis update` - Detect which files have been modified +- Determines if files can be auto-updated or need conflict resolution + +**Behavior**: + +- File hash matches template → Safe to update +- File hash differs → User modified, needs manual merge + +--- + +### `.version` + +**Purpose**: Track installed Trellis CLI version. + +**Created by**: `trellis init` or `trellis update` + +**Format**: Plain text, semver version string. + +``` +0.3.0 +``` + +**Used by**: + +- `trellis update` - Determine if update is needed +- Version mismatch detection + +--- + +### `.gitignore` + +**Purpose**: Define which files to exclude from git. 
+ +**Default content**: + +```gitignore +# Developer identity (local only) +.developer + +# Current task pointer +.current-task + +# Ralph Loop state +.ralph-state.json + +# Agent runtime files +.agents/ +.agent-log +.agent-runner.sh +.session-id + +# Task directory runtime files +.plan-log + +# Atomic update temp files +*.tmp +.backup-* +*.new + +# Python cache +**/__pycache__/ +**/*.pyc +``` + +--- + +### `workflow.md` + +**Purpose**: Main workflow documentation for developers and AI. + +**Created by**: `trellis init` + +**Content sections**: + +1. Quick Start guide +2. Workflow overview +3. Session start process +4. Development process +5. Session end +6. File descriptions +7. Best practices + +**Injected by**: `session-start.py` hook (Claude Code) + +**For Cursor**: Read manually at session start. + +--- + +### `config.yaml` + +**Purpose**: Project-level Trellis configuration. + +**Created by**: `trellis init` + +**Format**: YAML + +```yaml +# Commit message used when auto-committing journal/index changes +session_commit_message: 'chore: record journal' + +# Maximum lines per journal file before rotating to a new one +max_journal_lines: 2000 +``` + +**Used by**: `common/config.py` (read by `add_session.py`) + +**Behavior**: All values have sensible hardcoded defaults. If config.yaml is missing or a key is absent, the default is used. + +--- + +### `worktree.yaml` + +**Purpose**: Configure Multi-Session and Ralph Loop. + +**Created by**: `trellis init` + +**Format**: YAML + +```yaml +worktree_dir: ../worktrees +copy: + - .trellis/.developer + - .env +post_create: + - npm install +verify: + - pnpm lint + - pnpm typecheck +``` + +→ See `claude-code/worktree-config.md` for details. + +--- + +## Runtime Files (Gitignored) + +### `.agents/` + +**Purpose**: Agent registry for Multi-Session. + +**Location**: `.trellis/workspace/{developer}/.agents/` + +**Content**: `registry.json` tracking running agents. 
+ +--- + +### `.session-id` + +**Purpose**: Store Claude Code session ID for resume. + +**Created by**: Multi-Session `start.py` + +**Format**: UUID string. + +--- + +### `.agent-log` + +**Purpose**: Agent execution log. + +**Created by**: Multi-Session scripts. + +--- + +### `.plan-log` + +**Purpose**: Plan Agent execution log. + +**Location**: Task directory. + +--- + +## Directories + +### `workspace/` + +Developer workspaces with journals and indexes. + +→ See `core/workspace.md` + +### `tasks/` + +Task directories with PRDs and context files. + +→ See `core/tasks.md` + +### `spec/` + +Coding guidelines and specifications. + +→ See `core/specs.md` + +### `scripts/` + +Automation scripts. + +→ See `core/scripts.md` and `claude-code/scripts.md` + +--- + +## Template Files + +These files are managed by `trellis update`: + +| File | Purpose | +| ------------------------ | ------------------------ | +| `.trellis/workflow.md` | Workflow documentation | +| `.trellis/config.yaml` | Project-level config | +| `.trellis/worktree.yaml` | Multi-session config | +| `.trellis/.gitignore` | Git ignore rules | +| `.claude/hooks/*.py` | Hook scripts | +| `.claude/commands/*.md` | Slash commands | +| `.claude/agents/*.md` | Agent definitions | +| `.cursor/commands/*.md` | Cursor commands (mirror) | + +**Update behavior**: + +1. Compare file hash with `.template-hashes.json` +2. If unchanged → Auto-update +3. If modified → Create `.new` file for manual merge +4. 
Update hashes after successful update + +--- + +## File Lifecycle + +### Created by `trellis init` + +``` +.trellis/ +├── .template-hashes.json +├── .version +├── .gitignore +├── workflow.md +├── config.yaml +├── worktree.yaml +├── spec/ +│ ├── frontend/ +│ ├── backend/ +│ └── guides/ +└── scripts/ +``` + +### Created at runtime + +``` +.trellis/ +├── .developer # init_developer.py +├── .current-task # task.py start +├── .ralph-state.json # ralph-loop.py +├── workspace/{dev}/ # init_developer.py +│ ├── index.md +│ ├── journal-1.md +│ └── .agents/ +└── tasks/{task}/ # task.py create + ├── task.json + ├── prd.md + └── *.jsonl +``` + +### Cleaned up + +``` +# After task completion +.trellis/tasks/{task}/ → .trellis/tasks/archive/YYYY-MM/ + +# After worktree removal +.agents/registry.json entries removed +``` diff --git a/Skills/new-skills/trellis-meta/references/core/overview.md b/Skills/new-skills/trellis-meta/references/core/overview.md new file mode 100644 index 0000000..ca03a68 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/core/overview.md @@ -0,0 +1,74 @@ +# Core Systems Overview + +These systems work on **all 9 platforms** (Claude Code, Cursor, OpenCode, iFlow, Codex, Kilo, Kiro, Gemini CLI, Antigravity). + +--- + +## What's in Core? 
+ +| System | Purpose | Files | +| --------- | -------------------------- | --------------------------------- | +| Workspace | Session tracking, journals | `.trellis/workspace/` | +| Tasks | Work item tracking | `.trellis/tasks/` | +| Specs | Coding guidelines | `.trellis/spec/` | +| Commands | Slash command prompts | `.claude/commands/` | +| Scripts | Automation utilities | `.trellis/scripts/` (core subset) | + +--- + +## Why These Are Portable + +All core systems are **file-based**: + +- No special runtime required +- Read/write with any tool +- Works in any AI coding environment + +``` +┌─────────────────────────────────────────────────────────────┐ +│ CORE SYSTEMS (File-Based) │ +│ │ +│ .trellis/ │ +│ ├── workspace/ → Journals, session history │ +│ ├── tasks/ → Task directories, PRDs, context files │ +│ ├── spec/ → Coding guidelines │ +│ └── scripts/ → Python utilities (core subset) │ +│ │ +│ .claude/ │ +│ └── commands/ → Slash command prompts │ +│ │ +└─────────────────────────────────────────────────────────────┘ +``` + +--- + +## Platform Usage + +### Claude Code + +All core systems work automatically with hook integration. + +### iFlow + +All core systems work automatically with hook integration (same as Claude Code). + +### Cursor, OpenCode, Codex, Kilo, Kiro, Gemini CLI, Antigravity + +Read files manually at session start: + +1. Read `.trellis/workflow.md` +2. Read relevant specs from `.trellis/spec/` +3. Check `.trellis/.current-task` for active work +4. 
Read JSONL files for context + +--- + +## Documents in This Directory + +| Document | Content | +| -------------- | ---------------------------------------------- | +| `files.md` | All files in `.trellis/` with purposes | +| `workspace.md` | Workspace system, journals, developer identity | +| `tasks.md` | Task system, directories, JSONL context files | +| `specs.md` | Spec system, guidelines organization | +| `scripts.md` | Core scripts (platform-independent) | diff --git a/Skills/new-skills/trellis-meta/references/core/scripts.md b/Skills/new-skills/trellis-meta/references/core/scripts.md new file mode 100644 index 0000000..98f8bdc --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/core/scripts.md @@ -0,0 +1,308 @@ +# Core Scripts + +Platform-independent Python scripts for Trellis automation. + +--- + +## Overview + +These scripts work on all platforms - they only read/write files and don't require Claude Code's hook system. + +``` +.trellis/scripts/ +├── common/ # Shared utilities +│ ├── paths.py +│ ├── developer.py +│ ├── config.py +│ ├── task_utils.py +│ ├── phase.py +│ └── git_context.py +│ +├── init_developer.py # Initialize developer +├── get_developer.py # Get developer name +├── get_context.py # Get session context +├── task.py # Task management CLI +└── add_session.py # Record session +``` + +--- + +## Developer Scripts + +### `init_developer.py` + +Initialize developer identity. + +```bash +python3 .trellis/scripts/init_developer.py +``` + +**Creates:** + +- `.trellis/.developer` +- `.trellis/workspace/{developer}/` +- `.trellis/workspace/{developer}/index.md` +- `.trellis/workspace/{developer}/journal-1.md` + +--- + +### `get_developer.py` + +Get current developer name. + +```bash +python3 .trellis/scripts/get_developer.py +# Output: taosu +``` + +**Exit codes:** + +- `0` - Success +- `1` - Not initialized + +--- + +## Context Scripts + +### `get_context.py` + +Get session context for AI consumption. 
+ +```bash +python3 .trellis/scripts/get_context.py +``` + +**Output includes:** + +- Developer identity +- Git status and recent commits +- Current task (if any) +- Workspace summary + +--- + +### `add_session.py` + +Record session entry to journal. + +```bash +python3 .trellis/scripts/add_session.py \ + --title "Session Title" \ + --commit "hash1,hash2" \ + --summary "Brief summary" +``` + +**Options:** + +- `--title` - Session title (required) +- `--commit` - Comma-separated commit hashes +- `--summary` - Brief summary +- `--content-file` - Path to file with detailed content +- `--no-commit` - Skip auto-commit of workspace changes + +**Actions:** + +1. Appends to current journal +2. Updates index markers +3. Rotates journal if >max_journal_lines +4. Auto-commits `.trellis/workspace` changes (unless `--no-commit`) + +--- + +## Task Scripts + +### `task.py` + +Task management CLI. + +#### Create Task + +```bash +python3 .trellis/scripts/task.py create "Task name" --slug task-slug +``` + +**Options:** + +- `--slug` - URL-safe identifier +- `--assignee` - Developer name (default: current) +- `--type` - Dev type: frontend, backend, fullstack + +#### List Tasks + +```bash +python3 .trellis/scripts/task.py list +``` + +**Output:** + +``` +Active Tasks: + 01-31-add-login-taosu (active) + 01-30-fix-api-cursor-agent (paused) +``` + +#### Start Task + +```bash +python3 .trellis/scripts/task.py start <task-dir> +``` + +Sets `.trellis/.current-task` to the task directory. + +#### Stop Task + +```bash +python3 .trellis/scripts/task.py stop +``` + +Clears `.trellis/.current-task`. + +#### Initialize Context + +```bash +python3 .trellis/scripts/task.py init-context <task-dir> <dev-type> +``` + +**Dev types:** `frontend`, `backend`, `fullstack` + +Creates JSONL files with appropriate spec references. + +#### Set Branch + +```bash +python3 .trellis/scripts/task.py set-branch <task-dir> <branch> +``` + +Updates `branch` field in task.json. 
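Conceptually, `set-branch` is a small JSON read-modify-write. A minimal sketch of the equivalent operation (the real script presumably routes through `load_task_json`/`save_task_json` from `common/task_utils.py`; this standalone version is an assumption, not the actual implementation):

```python
import json
from pathlib import Path

def set_branch(task_dir: str, branch: str) -> None:
    """Sketch of `task.py set-branch`: update the `branch` field in task.json."""
    path = Path(task_dir) / "task.json"
    data = json.loads(path.read_text())   # load existing task metadata
    data["branch"] = branch               # `branch` field is documented in tasks.md
    path.write_text(json.dumps(data, indent=2) + "\n")
```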
+ +#### Archive Task + +```bash +python3 .trellis/scripts/task.py archive <task-dir> +``` + +Moves task to `.trellis/tasks/archive/YYYY-MM/`. + +#### List Archive + +```bash +python3 .trellis/scripts/task.py list-archive [month] +``` + +--- + +## Common Utilities + +### `common/paths.py` + +Path constants and utilities. + +```python +from common.paths import ( + TRELLIS_DIR, # .trellis/ + WORKSPACE_DIR, # .trellis/workspace/ + TASKS_DIR, # .trellis/tasks/ + SPEC_DIR, # .trellis/spec/ +) +``` + +### `common/developer.py` + +Developer management. + +```python +from common.developer import ( + get_developer, # Get current developer name + get_workspace_dir, # Get developer's workspace directory +) +``` + +### `common/task_utils.py` + +Task lookup functions. + +```python +from common.task_utils import ( + get_current_task, # Get current task directory + load_task_json, # Load task.json + save_task_json, # Save task.json +) +``` + +### `common/phase.py` + +Phase tracking. + +```python +from common.phase import ( + get_current_phase, # Get current phase number + advance_phase, # Move to next phase +) +``` + +### `common/config.py` + +Project-level configuration reader. + +```python +from common.config import ( + get_session_commit_message, # Commit message for auto-commit + get_max_journal_lines, # Max lines per journal file +) +``` + +Reads from `.trellis/config.yaml` with hardcoded fallback defaults. + +### `common/git_context.py` + +Git context generation. 
+ +```python +from common.git_context import ( + get_git_status, # Get git status + get_recent_commits, # Get recent commit messages + get_branch_name, # Get current branch +) +``` + +--- + +## Usage Examples + +### Initialize New Developer + +```bash +cd /path/to/project +python3 .trellis/scripts/init_developer.py john-doe +``` + +### Create and Start Task + +```bash +# Create task +python3 .trellis/scripts/task.py create "Add user login" --slug add-login + +# Initialize context for fullstack work +python3 .trellis/scripts/task.py init-context \ + .trellis/tasks/01-31-add-login-john-doe fullstack + +# Start task +python3 .trellis/scripts/task.py start \ + .trellis/tasks/01-31-add-login-john-doe +``` + +### Record Session + +```bash +python3 .trellis/scripts/add_session.py \ + --title "Implement login form" \ + --commit "abc1234" \ + --summary "Added login form, pending API integration" +``` + +### Archive Completed Task + +```bash +python3 .trellis/scripts/task.py archive \ + .trellis/tasks/01-31-add-login-john-doe +``` diff --git a/Skills/new-skills/trellis-meta/references/core/specs.md b/Skills/new-skills/trellis-meta/references/core/specs.md new file mode 100644 index 0000000..4a8f394 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/core/specs.md @@ -0,0 +1,224 @@ +# Spec System + +Maintain coding standards that guide AI development. + +--- + +## Directory Structure + +``` +.trellis/spec/ +├── frontend/ # Frontend guidelines +│ ├── index.md # Overview and quick reference +│ ├── component-guidelines.md +│ ├── hook-guidelines.md +│ ├── state-management.md +│ └── ... +│ +├── backend/ # Backend guidelines +│ ├── index.md +│ ├── directory-structure.md +│ ├── error-handling.md +│ ├── api-patterns.md +│ └── ... 
+│ +└── guides/ # Thinking guides + ├── index.md + ├── cross-layer-thinking-guide.md + ├── code-reuse-thinking-guide.md + └── cross-platform-thinking-guide.md +``` + +--- + +## Spec Categories + +### Frontend (`frontend/`) + +UI and client-side patterns: + +- Component structure +- React hooks usage +- State management +- Styling conventions +- Accessibility + +### Backend (`backend/`) + +Server-side patterns: + +- Directory structure +- API design +- Error handling +- Database access +- Security + +### Guides (`guides/`) + +Cross-cutting thinking guides: + +- How to think about cross-layer changes +- Code reuse strategies +- Platform considerations + +--- + +## Index Files + +Each category has an `index.md` that: + +1. Provides category overview +2. Lists all specs in the category +3. Gives quick reference for common patterns + +### Example: `frontend/index.md` + +```markdown +# Frontend Specifications + +## Quick Reference + +| Topic | Guideline | +| ---------- | -------------------------------- | +| Components | Functional components only | +| State | Use React Query for server state | +| Styling | Tailwind CSS | + +## Specifications + +1. [Component Guidelines](./component-guidelines.md) +2. [Hook Guidelines](./hook-guidelines.md) +3. [State Management](./state-management.md) +``` + +--- + +## Spec File Format + +````markdown +# [Spec Title] + +## Overview + +Brief description of what this spec covers. + +## Guidelines + +### 1. [Guideline Name] + +Detailed explanation... + +**Do:** + +```typescript +// Good example +``` +```` + +**Don't:** + +```typescript +// Bad example +``` + +### 2. [Another Guideline] + +... 
+ +## Related Specs + +- [Related Spec 1](./related-spec.md) + +```` + +--- + +## Using Specs + +### In JSONL Context Files + +Reference specs in task context: + +```jsonl +{"file": ".trellis/spec/frontend/index.md", "reason": "Frontend overview"} +{"file": ".trellis/spec/frontend/component-guidelines.md", "reason": "Component patterns"} +```` + +### Manual Reading (Cursor) + +Read specs at session start: + +``` +1. Read .trellis/spec/{category}/index.md +2. Read specific guidelines as needed +3. Follow patterns in your code +``` + +--- + +## Creating New Specs + +### 1. Choose Category + +- Frontend UI patterns → `frontend/` +- Backend/API patterns → `backend/` +- Cross-cutting guides → `guides/` + +### 2. Create Spec File + +```bash +touch .trellis/spec/frontend/new-pattern.md +``` + +### 3. Follow Format + +Use the spec file format above. + +### 4. Update Index + +Add to category's `index.md`: + +```markdown +## Specifications + +... +N. [New Pattern](./new-pattern.md) +``` + +### 5. Reference in JSONL + +Add to relevant task context files. + +--- + +## Adding New Categories + +### 1. Create Directory + +```bash +mkdir .trellis/spec/mobile +``` + +### 2. Create Index + +```bash +touch .trellis/spec/mobile/index.md +``` + +### 3. Add Category Specs + +Create individual spec files. + +### 4. Update Task Templates + +Ensure new category is available in JSONL templates. + +--- + +## Best Practices + +1. **Keep specs focused** - One topic per file +2. **Use examples** - Show do/don't patterns +3. **Link related specs** - Cross-reference +4. **Update regularly** - Specs evolve with codebase +5. **Index everything** - Keep index files current diff --git a/Skills/new-skills/trellis-meta/references/core/tasks.md b/Skills/new-skills/trellis-meta/references/core/tasks.md new file mode 100644 index 0000000..b460ba3 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/core/tasks.md @@ -0,0 +1,223 @@ +# Task System + +Track work items with phase-based execution. 
+ +--- + +## Directory Structure + +``` +.trellis/tasks/ +├── {MM-DD-slug-assignee}/ # Active task directories +│ ├── task.json # Metadata, phases, branch +│ ├── prd.md # Requirements document +│ ├── info.md # Additional context (optional) +│ ├── implement.jsonl # Context for implement phase +│ ├── check.jsonl # Context for check phase +│ └── debug.jsonl # Context for debug phase +│ +└── archive/ # Completed tasks + └── {YYYY-MM}/ + └── {task-dir}/ +``` + +--- + +## Task Directory Naming + +Format: `{MM-DD}-{slug}-{assignee}` + +Examples: + +- `01-31-add-login-taosu` +- `02-01-fix-api-bug-cursor-agent` + +--- + +## task.json + +Task metadata and workflow configuration. + +```json +{ + "name": "Add user login", + "slug": "add-login", + "created": "2026-01-31T10:30:00", + "assignee": "taosu", + "status": "active", + "dev_type": "fullstack", + "scope": ["frontend", "backend"], + "branch": "feature/add-login", + "base_branch": "main", + "current_phase": 1, + "next_action": [ + { "phase": 1, "action": "implement" }, + { "phase": 2, "action": "check" }, + { "phase": 3, "action": "finish" } + ] +} +``` + +### Fields + +| Field | Type | Description | +| --------------- | -------- | ---------------------------------- | +| `name` | string | Human-readable task name | +| `slug` | string | URL-safe identifier | +| `created` | ISO date | Creation timestamp | +| `assignee` | string | Developer name | +| `status` | string | `active`, `paused`, `completed` | +| `dev_type` | string | `frontend`, `backend`, `fullstack` | +| `scope` | array | Affected areas | +| `branch` | string | Git branch name | +| `base_branch` | string | Branch to merge into | +| `current_phase` | number | Current workflow phase | +| `next_action` | array | Workflow phases | + +--- + +## prd.md + +Requirements document for the task. + +```markdown +# Add User Login + +## Overview + +Implement user authentication with email/password. + +## Requirements + +1. Login form with email and password fields +2. 
Form validation +3. API endpoint for authentication +4. Session management + +## Acceptance Criteria + +- [ ] User can log in with valid credentials +- [ ] Error shown for invalid credentials +- [ ] Session persists across page refresh + +## Technical Notes + +- Use existing auth service pattern +- Follow security guidelines in spec +``` + +--- + +## JSONL Context Files + +List files to inject as context for each phase. + +### Format + +```jsonl +{"file": ".trellis/spec/backend/index.md", "reason": "Backend guidelines"} +{"file": "src/services/auth.ts", "reason": "Existing auth service"} +{"file": ".trellis/tasks/01-31-add-login/prd.md", "reason": "Requirements"} +``` + +### Files + +| File | Phase | Purpose | +| ----------------- | --------- | ------------------------------ | +| `implement.jsonl` | implement | Dev specs, patterns to follow | +| `check.jsonl` | check | Quality criteria, review specs | +| `debug.jsonl` | debug | Debug context, error reports | + +--- + +## Current Task Pointer + +### `.trellis/.current-task` + +Points to active task directory. + +``` +.trellis/tasks/01-31-add-login-taosu +``` + +### Set Current Task + +```bash +python3 .trellis/scripts/task.py start <task-dir> +``` + +### Clear Current Task + +```bash +python3 .trellis/scripts/task.py stop +``` + +--- + +## Task CLI + +### Create Task + +```bash +python3 .trellis/scripts/task.py create "Task name" --slug task-slug +``` + +### List Tasks + +```bash +python3 .trellis/scripts/task.py list +``` + +### Start Task + +```bash +python3 .trellis/scripts/task.py start <task-dir> +``` + +### Initialize Context + +```bash +python3 .trellis/scripts/task.py init-context <task-dir> <dev-type> +``` + +Dev types: `frontend`, `backend`, `fullstack` + +### Archive Task + +```bash +python3 .trellis/scripts/task.py archive <task-dir> +``` + +--- + +## Workflow Phases + +Standard phase progression: + +``` +1. implement → Write code +2. check → Review and fix +3. finish → Final verification +4. 
create-pr → Create pull request (Multi-Session only) +``` + +### Custom Phases + +Modify `next_action` in task.json: + +```json +"next_action": [ + {"phase": 1, "action": "research"}, + {"phase": 2, "action": "implement"}, + {"phase": 3, "action": "check"} +] +``` + +--- + +## Best Practices + +1. **One task at a time** - Use `.current-task` to track focus +2. **Clear PRDs** - Write specific, testable requirements +3. **Relevant context** - Only include needed files in JSONL +4. **Archive completed** - Keep task directory clean diff --git a/Skills/new-skills/trellis-meta/references/core/workspace.md b/Skills/new-skills/trellis-meta/references/core/workspace.md new file mode 100644 index 0000000..5a74232 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/core/workspace.md @@ -0,0 +1,160 @@ +# Workspace System + +Track development progress across sessions with per-developer isolation. + +--- + +## Directory Structure + +``` +.trellis/workspace/ +├── index.md # Global overview +└── {developer}/ # Per-developer directory + ├── index.md # Personal index with @@@auto markers + ├── journal-1.md # Session journal (max 2000 lines) + ├── journal-2.md # Rolls over when limit reached + └── ... +``` + +--- + +## Developer Identity + +### `.trellis/.developer` + +Stores current developer name. Created by `init_developer.py`. + +``` +taosu +``` + +### Initialize Developer + +```bash +python3 .trellis/scripts/init_developer.py +``` + +Creates: + +- `.trellis/.developer` - Identity file +- `.trellis/workspace/{developer}/` - Personal workspace +- `.trellis/workspace/{developer}/index.md` - Personal index +- `.trellis/workspace/{developer}/journal-1.md` - First journal + +--- + +## Journals + +### Purpose + +Track session history, decisions, and context. 
+ +### Format + +```markdown +# Journal 1 + +## Session: 2026-01-31 10:30 + +### Context + +- Working on: [task description] +- Branch: feature/add-login + +### Progress + +- [x] Completed step 1 +- [ ] Working on step 2 + +### Notes + +Key decisions and learnings... + +--- +``` + +### Journal Rotation + +When a journal exceeds 2000 lines: + +1. Archive current (append to index) +2. Create new journal-N.md +3. Continue writing + +--- + +## Personal Index + +### `workspace/{developer}/index.md` + +Tracks all sessions and provides quick reference. + +```markdown +# Developer Workspace - taosu + +## Active Work + +- Current task: `.trellis/tasks/01-31-add-login-taosu` +- Branch: feature/add-login + +## Recent Sessions + +<!-- @@@auto-sessions-start --> + +- 2026-01-31: Implemented login UI +- 2026-01-30: Set up auth service + +<!-- @@@auto-sessions-end --> + +## Journals + +- journal-1.md (lines 1-2000) +- journal-2.md (current) +``` + +### @@@auto Markers + +Scripts use these markers to auto-update sections: + +- `@@@auto-sessions-start/end` - Recent sessions list +- `@@@auto-tasks-start/end` - Task summaries + +--- + +## Global Index + +### `workspace/index.md` + +Overview of all developers and project status. + +```markdown +# Project Workspace + +## Developers + +- taosu - Last active: 2026-01-31 +- cursor-agent - Last active: 2026-01-30 + +## Recent Activity + +... +``` + +--- + +## Scripts + +| Script | Purpose | +| ------------------- | ----------------------------- | +| `init_developer.py` | Initialize developer identity | +| `get_developer.py` | Get current developer name | +| `add_session.py` | Record session to journal | +| `get_context.py` | Get session context for AI | + +--- + +## Best Practices + +1. **One developer per machine** - Identity stored in `.developer` +2. **Regular journaling** - Record decisions and context +3. **Use markers** - Let scripts auto-update indexes +4. 
**Review journals** - Before starting new sessions diff --git a/Skills/new-skills/trellis-meta/references/how-to-modify/add-agent.md b/Skills/new-skills/trellis-meta/references/how-to-modify/add-agent.md new file mode 100644 index 0000000..39b8be0 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/how-to-modify/add-agent.md @@ -0,0 +1,224 @@ +# How To: Add Agent + +Add a new agent type like `my-agent`. + +**Platform**: Claude Code only + +--- + +## Files to Modify + +| File | Action | Required | +| ------------------------------------------ | ------ | --------------------- | +| `.claude/agents/my-agent.md` | Create | Yes | +| `.claude/hooks/inject-subagent-context.py` | Modify | Yes | +| `.trellis/tasks/{template}/my-agent.jsonl` | Create | Yes | +| `trellis-local/SKILL.md` | Update | Yes | +| `.claude/agents/dispatch.md` | Modify | If adding to pipeline | + +--- + +## Step 1: Create Agent Definition + +Create `.claude/agents/my-agent.md`: + +```markdown +--- +name: my-agent +description: | + What this agent specializes in. + When it should be used. +tools: Read, Write, Edit, Bash, Glob, Grep +model: opus +--- + +# My Agent + +## Core Responsibilities + +1. Primary responsibility +2. Secondary responsibility +3. ... + +## Workflow + +1. First step +2. Second step +3. ... + +## Forbidden Operations + +- Thing 1 (why it's forbidden) +- Thing 2 (why it's forbidden) +- git commit (unless explicitly allowed) + +## Output Format + +What the agent should produce. 
+``` + +### Agent Definition Fields + +| Field | Required | Description | +| ------------- | -------- | --------------------------- | +| `name` | Yes | Agent identifier | +| `description` | Yes | What the agent does | +| `tools` | Yes | Allowed tools | +| `model` | No | Model to use (opus, sonnet) | + +--- + +## Step 2: Update Hook + +Edit `.claude/hooks/inject-subagent-context.py`: + +### Add Constant + +```python +# Near other agent constants +AGENT_MY_AGENT = "my-agent" + +# Add to list +AGENTS_ALL = (..., AGENT_MY_AGENT) +``` + +### Add Context Function + +```python +def get_my_agent_context(repo_root: str, task_dir: str) -> list: + """Get context for my-agent.""" + context_files = [] + + # Load from JSONL + jsonl_path = os.path.join(task_dir, "my-agent.jsonl") + if os.path.exists(jsonl_path): + context_files.extend(load_jsonl_context(jsonl_path)) + + # Add any additional files + # context_files.append({"file": "...", "reason": "..."}) + + return context_files +``` + +### Add to Main Switch + +```python +elif subagent_type == AGENT_MY_AGENT: + context = get_my_agent_context(repo_root, task_dir) + new_prompt = build_agent_prompt( + agent_name="My Agent", + original_prompt=original_prompt, + context=context + ) +``` + +--- + +## Step 3: Create JSONL Template + +Create context template for task directories. 
+ +**Option A**: Add to `task.py init-context`: + +```python +def init_my_agent_context(task_dir, dev_type): + jsonl_path = os.path.join(task_dir, "my-agent.jsonl") + with open(jsonl_path, "w") as f: + # Add relevant specs + f.write(json.dumps({ + "file": ".trellis/spec/guides/index.md", + "reason": "Thinking guides" + }) + "\n") +``` + +**Option B**: Manually create template: + +```jsonl +{"file": ".trellis/spec/guides/index.md", "reason": "Thinking guides"} +{"file": ".trellis/tasks/{task}/prd.md", "reason": "Requirements"} +``` + +--- + +## Step 4: Add to Pipeline (Optional) + +If the agent should be part of the standard workflow: + +### Update task.json Template + +```json +"next_action": [ + {"phase": 1, "action": "implement"}, + {"phase": 2, "action": "my-agent"}, // Add here + {"phase": 3, "action": "check"}, + {"phase": 4, "action": "finish"} +] +``` + +### Update dispatch.md + +Add handling for the new phase: + +```markdown +## Phase Handling + +... + +### my-agent Phase + +- Call `Task(subagent_type="my-agent")` +- Wait for completion +- Proceed to next phase +``` + +--- + +## Step 5: Document in trellis-local + +Update `.claude/skills/trellis-local/SKILL.md`: + +```markdown +## Agents + +### Added Agents + +#### my-agent + +- **File**: `.claude/agents/my-agent.md` +- **Platform**: [CC] +- **Purpose**: What it does +- **Tools**: Read, Write, Edit, Bash, Glob, Grep +- **Added**: 2026-01-31 +- **Reason**: Why it was added + +### Hooks Changed + +#### inject-subagent-context.py + +- **Change**: Added support for `my-agent` type +- **Lines modified**: XX-YY +- **Date**: 2026-01-31 +``` + +--- + +## Testing + +1. Create a task with my-agent.jsonl +2. Set as current task: `task.py start ` +3. Invoke agent: `Task(subagent_type="my-agent", prompt="Test")` +4. Verify context injection works +5. 
Verify agent behavior matches definition + +--- + +## Checklist + +- [ ] Agent definition created with proper frontmatter +- [ ] Hook updated with agent constant +- [ ] Hook updated with context function +- [ ] Hook updated with main switch case +- [ ] JSONL template created +- [ ] Added to pipeline (if needed) +- [ ] Documented in trellis-local +- [ ] Tested the agent diff --git a/Skills/new-skills/trellis-meta/references/how-to-modify/add-command.md b/Skills/new-skills/trellis-meta/references/how-to-modify/add-command.md new file mode 100644 index 0000000..0a69d12 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/how-to-modify/add-command.md @@ -0,0 +1,152 @@ +# How To: Add Slash Command + +Add a new `/trellis:my-command` command. + +**Platform**: All (9 platforms — each has its own command format) + +--- + +## Files to Modify + +| File | Action | Required | +| ---------------------------------------- | ------ | -------- | +| `.claude/commands/trellis/my-command.md` | Create | Yes | +| `.cursor/commands/my-command.md` | Create | Optional | +| `trellis-local/SKILL.md` | Update | Yes | + +--- + +## Step 1: Create Command File + +Create `.claude/commands/trellis/my-command.md`: + +```markdown +--- +name: my-command +description: Short description of what the command does +--- + +# My Command + +## Purpose + +Detailed description of the command's purpose. + +## When to Use + +- Scenario 1 +- Scenario 2 + +## Workflow + +1. First step +2. Second step +3. Third step + +## Output + +What the command produces. +``` + +### Command Name Convention + +- Use kebab-case: `my-command`, not `myCommand` +- Prefix with category if needed: `check-backend`, `before-frontend-dev` + +--- + +## Step 2: Mirror to Other Platforms (Optional) + +Commands are automatically mirrored to configured platforms by `trellis init` and `trellis update`. 
Each platform uses its own format: + +| Platform | Path | Format | +| ----------- | ------------------------------------------ | -------- | +| Cursor | `.cursor/commands/trellis-my-command.md` | Markdown | +| OpenCode | `.opencode/agents/trellis-my-command.md` | Markdown | +| iFlow | `.iflow/commands/trellis/my-command.md` | Markdown | +| Codex | `.agents/skills/my-command/SKILL.md` | Skill | +| Kilo | `.kilocode/commands/trellis/my-command.md` | Markdown | +| Kiro | `.kiro/skills/my-command/SKILL.md` | Skill | +| Gemini CLI | `.gemini/commands/trellis/my-command.toml` | TOML | +| Antigravity | `.agent/workflows/my-command.md` | Markdown | + +--- + +## Step 3: Document in trellis-local + +Update `.claude/skills/trellis-local/SKILL.md`: + +```markdown +## Commands + +### Added Commands + +#### /trellis:my-command + +- **File**: `.claude/commands/trellis/my-command.md` +- **Platform**: [ALL] +- **Purpose**: What it does +- **Added**: 2026-01-31 +- **Reason**: Why it was added +``` + +--- + +## Examples + +### Simple Command + +```markdown +--- +name: check-types +description: Run TypeScript type checking +--- + +# Check Types + +Run `pnpm typecheck` and report results. + +## Usage + +Run this command after making code changes to verify type safety. +``` + +### Command with Parameters + +Commands can reference user input or context: + +```markdown +--- +name: review-file +description: Review a specific file for code quality +--- + +# Review File + +## Input + +User should specify which file to review. + +## Workflow + +1. Read the specified file +2. Check against relevant specs +3. Report issues found +``` + +--- + +## Testing + +1. Run the command: `/trellis:my-command` +2. Verify behavior matches description +3. 
Test edge cases + +--- + +## Checklist + +- [ ] Command file created with proper frontmatter +- [ ] Mirrored to Cursor (if needed) +- [ ] Documented in trellis-local +- [ ] Tested the command diff --git a/Skills/new-skills/trellis-meta/references/how-to-modify/add-phase.md b/Skills/new-skills/trellis-meta/references/how-to-modify/add-phase.md new file mode 100644 index 0000000..244d09d --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/how-to-modify/add-phase.md @@ -0,0 +1,237 @@ +# How To: Add Workflow Phase + +Add a new phase to the task workflow pipeline. + +**Platform**: Claude Code only + +--- + +## Files to Modify + +| File | Action | Required | +| ------------------------------- | ------ | ------------ | +| Task `task.json` | Modify | Yes | +| `.claude/agents/dispatch.md` | Modify | Yes | +| `.claude/agents/{new-agent}.md` | Create | If new agent | +| `inject-subagent-context.py` | Modify | If new agent | +| `trellis-local/SKILL.md` | Update | Yes | + +--- + +## Standard Phases + +Default workflow: + +``` +implement → check → finish → create-pr +``` + +--- + +## Step 1: Update task.json + +Modify the `next_action` array in task.json: + +### Add Phase After Implement + +```json +{ + "next_action": [ + { "phase": 1, "action": "implement" }, + { "phase": 2, "action": "review" }, // New phase + { "phase": 3, "action": "check" }, + { "phase": 4, "action": "finish" }, + { "phase": 5, "action": "create-pr" } + ] +} +``` + +### Add Phase Before Implement + +```json +{ + "next_action": [ + { "phase": 1, "action": "design" }, // New phase + { "phase": 2, "action": "implement" }, + { "phase": 3, "action": "check" }, + { "phase": 4, "action": "finish" } + ] +} +``` + +--- + +## Step 2: Update Dispatch Agent + +Edit `.claude/agents/dispatch.md`: + +### Add Phase Handling + +```markdown +## Phase Handling + +### implement Phase + +...existing... 
+ +### review Phase (NEW) + +- Purpose: Review implementation before check +- Call: `Task(subagent_type="review")` +- Next: Proceed to check phase + +### check Phase + +...existing... +``` + +### Update Workflow Description + +```markdown +## Workflow + +1. Read task.json for next_action +2. Execute phases in order: + - implement: Write code + - review: Review implementation (NEW) + - check: Quality verification + - finish: Final review + - create-pr: Create pull request +``` + +--- + +## Step 3: Create Agent (If New) + +If the phase uses a new agent, create the agent definition. + +→ See `add-agent.md` for full details. + +Quick version: + +```markdown +--- +name: review +description: Review implementation before check phase. +tools: Read, Glob, Grep +--- + +# Review Agent + +## Core Responsibilities + +1. Review code changes +2. Check against requirements +3. Identify issues before check phase + +## Forbidden Operations + +- Writing code (that's implement's job) +- Git operations +``` + +--- + +## Step 4: Update Hook (If New Agent) + +If using a new agent, update `inject-subagent-context.py`: + +```python +AGENT_REVIEW = "review" +AGENTS_ALL = (..., AGENT_REVIEW) + +def get_review_context(repo_root, task_dir): + # Load review.jsonl + ... + +elif subagent_type == AGENT_REVIEW: + context = get_review_context(repo_root, task_dir) + ... 
+``` + +--- + +## Step 5: Update Task Templates + +Update default task.json creation in `task.py`: + +```python +default_next_action = [ + {"phase": 1, "action": "implement"}, + {"phase": 2, "action": "review"}, # Add new phase + {"phase": 3, "action": "check"}, + {"phase": 4, "action": "finish"}, +] +``` + +--- + +## Step 6: Document in trellis-local + +```markdown +## Workflow Changes + +### Added review Phase + +- **Position**: After implement, before check +- **Agent**: review +- **Purpose**: Review implementation quality +- **Date**: 2026-01-31 +- **Reason**: Catch issues before check phase +``` + +--- + +## Common Phase Patterns + +### Design → Implement → Check + +```json +"next_action": [ + {"phase": 1, "action": "design"}, + {"phase": 2, "action": "implement"}, + {"phase": 3, "action": "check"} +] +``` + +### Implement → Test → Check + +```json +"next_action": [ + {"phase": 1, "action": "implement"}, + {"phase": 2, "action": "test"}, + {"phase": 3, "action": "check"} +] +``` + +### Research → Implement → Check + +```json +"next_action": [ + {"phase": 1, "action": "research"}, + {"phase": 2, "action": "implement"}, + {"phase": 3, "action": "check"} +] +``` + +--- + +## Testing + +1. Create task with new phase in next_action +2. Set as current task +3. Run dispatch agent +4. Verify phases execute in order +5. 
Verify new phase works correctly + +--- + +## Checklist + +- [ ] task.json updated with new phase +- [ ] dispatch.md updated with phase handling +- [ ] Agent created (if new) +- [ ] Hook updated (if new agent) +- [ ] Task templates updated +- [ ] Documented in trellis-local +- [ ] Tested workflow diff --git a/Skills/new-skills/trellis-meta/references/how-to-modify/add-spec.md b/Skills/new-skills/trellis-meta/references/how-to-modify/add-spec.md new file mode 100644 index 0000000..f88fcc7 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/how-to-modify/add-spec.md @@ -0,0 +1,212 @@ +# How To: Add Spec Category + +Add a new spec category like `mobile/`. + +**Platform**: All + +--- + +## Files to Modify + +| File | Action | Required | +| ------------------------------- | ------ | -------- | +| `.trellis/spec/mobile/index.md` | Create | Yes | +| `.trellis/spec/mobile/*.md` | Create | Yes | +| Task JSONL templates | Update | Yes | +| `trellis-local/SKILL.md` | Update | Yes | + +--- + +## Step 1: Create Category Directory + +```bash +mkdir -p .trellis/spec/mobile +``` + +--- + +## Step 2: Create Index File + +Create `.trellis/spec/mobile/index.md`: + +```markdown +# Mobile Specifications + +Guidelines for mobile development. + +## Quick Reference + +| Topic | Guideline | +| ------------ | ------------------ | +| Architecture | MVVM pattern | +| State | Use StateFlow | +| Navigation | Jetpack Navigation | + +## Specifications + +1. [Architecture Guidelines](./architecture.md) +2. [UI Guidelines](./ui-guidelines.md) +3. [State Management](./state-management.md) + +## Key Principles + +- Principle 1 +- Principle 2 +- Principle 3 +``` + +--- + +## Step 3: Create Spec Files + +Create individual spec files in the category: + +### Example: `architecture.md` + +````markdown +# Mobile Architecture + +## Overview + +Description of architecture approach. + +## Guidelines + +### 1. Use MVVM Pattern + +Explanation... 
**Do:**

```kotlin
// Good example
```

**Don't:**

```kotlin
// Bad example
```

### 2. Another Guideline

...

## Related Specs

- [UI Guidelines](./ui-guidelines.md)

````

---

## Step 4: Update JSONL Templates

Add the new specs to relevant JSONL templates.

### Option A: Update task.py

Modify `init-context` to include mobile specs:

```python
def init_mobile_context(task_dir):
    jsonl_path = os.path.join(task_dir, "implement.jsonl")
    with open(jsonl_path, "a") as f:
        f.write(json.dumps({
            "file": ".trellis/spec/mobile/index.md",
            "reason": "Mobile guidelines"
        }) + "\n")
```

### Option B: Add to Existing Templates

Edit existing JSONL files:

```jsonl
{"file": ".trellis/spec/mobile/index.md", "reason": "Mobile guidelines"}
{"file": ".trellis/spec/mobile/architecture.md", "reason": "Architecture patterns"}
```

---

## Step 5: Document in trellis-local

Update `.claude/skills/trellis-local/SKILL.md`:

```markdown
## Specs Customized

### Added Categories

#### mobile/

- **Path**: `.trellis/spec/mobile/`
- **Purpose**: Mobile development guidelines
- **Added**: 2026-01-31
- **Files**:
  - `index.md` - Overview
  - `architecture.md` - Architecture patterns
  - `ui-guidelines.md` - UI patterns
```

---

## Spec File Best Practices

### Structure

```markdown
# [Spec Title]

## Overview

Brief description.

## Guidelines

### 1. [Guideline Name]

Explanation with examples.

### 2. [Another Guideline]

...

## Related Specs

Links to related specs.
```

### Naming

- Use kebab-case: `ui-guidelines.md`
- Be descriptive: `state-management.md` not `state.md`

### Cross-References

Link between specs:

```markdown
See [State Management](./state-management.md) for more details.
```

---

## Testing

1. Verify index links work
2. Create a task with the new specs in JSONL
3. Verify specs are injected correctly (Claude Code)
4.
Verify specs are readable (Cursor) + +--- + +## Checklist + +- [ ] Category directory created +- [ ] Index file created with overview +- [ ] Spec files created with proper format +- [ ] JSONL templates updated +- [ ] Documented in trellis-local +- [ ] Cross-references verified diff --git a/Skills/new-skills/trellis-meta/references/how-to-modify/change-verify.md b/Skills/new-skills/trellis-meta/references/how-to-modify/change-verify.md new file mode 100644 index 0000000..5df200a --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/how-to-modify/change-verify.md @@ -0,0 +1,180 @@ +# How To: Change Verify Commands + +Add or modify Ralph Loop verification commands. + +**Platform**: Claude Code only (Ralph Loop) + +--- + +## Files to Modify + +| File | Action | Required | +| ------------------------ | ------ | -------- | +| `.trellis/worktree.yaml` | Modify | Yes | + +--- + +## Step 1: Edit worktree.yaml + +Open `.trellis/worktree.yaml` and modify the `verify` section: + +```yaml +verify: + - pnpm lint + - pnpm typecheck + - pnpm test # Add this +``` + +--- + +## Common Scenarios + +### Add Test Verification + +```yaml +verify: + - pnpm lint + - pnpm typecheck + - pnpm test +``` + +### Add Build Verification + +```yaml +verify: + - pnpm lint + - pnpm typecheck + - pnpm build +``` + +### Add Specific Test Suite + +```yaml +verify: + - pnpm lint + - pnpm typecheck + - pnpm test:unit # Fast unit tests only +``` + +### Different Languages + +**Go:** + +```yaml +verify: + - go fmt ./... + - go vet ./... + - golangci-lint run + - go test ./... +``` + +**Python:** + +```yaml +verify: + - ruff check . + - mypy . + - pytest -x +``` + +**Rust:** + +```yaml +verify: + - cargo fmt --check + - cargo clippy + - cargo test +``` + +--- + +## Execution Details + +### Order + +Commands run in order. First failure stops execution. 
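That ordering can be sketched as follows — an illustrative stand-in, not the actual `ralph-loop.py` implementation:

```python
import subprocess

def run_verify(commands, timeout=120):
    """Run verify commands in order; the first non-zero exit stops the run."""
    for cmd in commands:
        result = subprocess.run(cmd, shell=True, timeout=timeout)
        if result.returncode != 0:
            return False, cmd  # report which command failed
    return True, None  # all commands passed
```

Returning the failing command is what lets the loop tell the agent which check to fix first.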
**Recommended order**: fast → slow

```yaml
verify:
  - pnpm lint # ~2 seconds
  - pnpm typecheck # ~10 seconds
  - pnpm test:unit # ~30 seconds
  - pnpm build # ~60 seconds
```

### Timeout

Each command has a 120-second timeout.

For long-running commands:

- Split into smaller chunks
- Use a faster subset for Ralph Loop
- Run the full suite manually

### Exit Codes

- Exit 0 = Pass
- Non-zero = Fail; Ralph Loop blocks the stop and the agent keeps working

---

## Testing

### Manual Test

```bash
# Run commands manually
pnpm lint && pnpm typecheck && pnpm test

# Should all pass for Ralph Loop to allow stop
```

### Integration Test

1. Make a change that fails linting
2. Run check agent
3. Verify Ralph Loop blocks and shows error
4. Fix the issue
5. Verify Ralph Loop allows stop

---

## Troubleshooting

### Command Not Found

Ensure the command is available:

```bash
which pnpm # or npm, yarn, etc.
```

### Timeout Issues

Increase the timeout in `ralph-loop.py`:

```python
COMMAND_TIMEOUT = 180  # Default is 120
```

### Skip Verify Temporarily

Comment out commands:

```yaml
verify:
  - pnpm lint
  # - pnpm typecheck # Skip temporarily
```

---

## Checklist

- [ ] Commands added to worktree.yaml
- [ ] Commands tested manually
- [ ] Order is fast → slow
- [ ] No timeout issues diff --git a/Skills/new-skills/trellis-meta/references/how-to-modify/modify-hook.md b/Skills/new-skills/trellis-meta/references/how-to-modify/modify-hook.md new file mode 100644 index 0000000..ca2181d --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/how-to-modify/modify-hook.md @@ -0,0 +1,270 @@ +# How To: Modify Hook

Change hook behavior for context injection or validation.
+ +**Platform**: Claude Code only + +--- + +## Files to Modify + +| File | Action | Required | +| ------------------------- | ------ | --------------------------- | +| `.claude/hooks/{hook}.py` | Modify | Yes | +| `.claude/settings.json` | Modify | If changing matcher/timeout | +| `trellis-local/SKILL.md` | Update | Yes | + +--- + +## Hook Types + +| Hook | File | Purpose | +| ------------------ | ---------------------------- | ---------------------- | +| SessionStart | `session-start.py` | Inject initial context | +| PreToolUse:Task | `inject-subagent-context.py` | Inject agent context | +| SubagentStop:check | `ralph-loop.py` | Quality enforcement | + +--- + +## Step 1: Understand Hook Structure + +### Input (stdin) + +Hooks receive JSON input: + +```json +{ + "hook_event": "PreToolUse", + "tool_name": "Task", + "tool_input": { + "subagent_type": "implement", + "prompt": "..." + } +} +``` + +### Output (stdout) + +Hooks output JSON: + +```json +{ + "result": "continue", + "message": "Optional message to inject", + "updatedInput": { + "prompt": "Modified prompt..." 
+ } +} +``` + +### Result Types + +| Result | Effect | +| ---------- | ---------------------------------- | +| `continue` | Allow operation, optionally modify | +| `block` | Prevent operation | + +--- + +## Step 2: Modify Hook Logic + +### Example: Add Context to Session Start + +Edit `.claude/hooks/session-start.py`: + +```python +def get_additional_context(): + """Add custom context.""" + context = [] + + # Add custom file + custom_path = os.path.join(repo_root, ".trellis/custom.md") + if os.path.exists(custom_path): + with open(custom_path) as f: + context.append(f"## Custom Context\n{f.read()}") + + return "\n".join(context) + +# In main(): +additional = get_additional_context() +message = f"{existing_message}\n\n{additional}" +``` + +### Example: Add Agent Validation + +Edit `.claude/hooks/inject-subagent-context.py`: + +```python +def validate_agent_input(subagent_type, prompt): + """Validate agent invocation.""" + if subagent_type == "implement": + if "git commit" in prompt.lower(): + return False, "Implement agent cannot commit" + return True, None + +# In main(): +valid, error = validate_agent_input(subagent_type, prompt) +if not valid: + output = {"result": "block", "message": error} + print(json.dumps(output)) + return +``` + +### Example: Add Verify Command + +Edit `.claude/hooks/ralph-loop.py`: + +```python +# Add to verify commands list +ADDITIONAL_COMMANDS = ["pnpm test:unit"] + +def get_verify_commands(): + commands = read_worktree_yaml_verify() + commands.extend(ADDITIONAL_COMMANDS) + return commands +``` + +--- + +## Step 3: Modify Settings (If Needed) + +Edit `.claude/settings.json`: + +### Change Timeout + +```json +{ + "hooks": { + "PreToolUse": [ + { + "matcher": "Task", + "hooks": [ + { + "type": "command", + "command": "python3 ...", + "timeout": 60 // Increase from 30 + } + ] + } + ] + } +} +``` + +### Change Matcher + +```json +{ + "hooks": { + "SubagentStop": [ + { + "matcher": "check|my-agent", // Add new agent + "hooks": [...] 
+ } + ] + } +} +``` + +--- + +## Step 4: Document in trellis-local + +Update `.claude/skills/trellis-local/SKILL.md`: + +```markdown +## Hooks Changed + +#### session-start.py + +- **Hook Event**: SessionStart +- **Change**: Added custom context injection +- **Lines modified**: 45-60 +- **Date**: 2026-01-31 +- **Reason**: Need to inject project-specific context + +#### inject-subagent-context.py + +- **Hook Event**: PreToolUse:Task +- **Change**: Added validation for implement agent +- **Lines modified**: 120-135 +- **Date**: 2026-01-31 +- **Reason**: Prevent accidental git commits +``` + +--- + +## Testing + +### Manual Test + +```bash +# Test session-start +python3 .claude/hooks/session-start.py + +# Test inject-subagent-context +echo '{"tool_input":{"subagent_type":"implement","prompt":"test"}}' | \ + python3 .claude/hooks/inject-subagent-context.py + +# Test ralph-loop +echo '{"subagent_type":"check","output":"test"}' | \ + python3 .claude/hooks/ralph-loop.py +``` + +### Integration Test + +1. Start new Claude Code session +2. Verify session-start output +3. Invoke subagent +4. Verify context injection +5. 
Verify Ralph Loop (for check agent) + +--- + +## Common Modifications + +### Add File to Session Context + +```python +# session-start.py +files_to_inject = [ + ".trellis/workflow.md", + ".trellis/custom-context.md", # Add this +] +``` + +### Skip Injection for Certain Agents + +```python +# inject-subagent-context.py +SKIP_INJECTION = ["research"] + +if subagent_type in SKIP_INJECTION: + print(json.dumps({"result": "continue"})) + return +``` + +### Add Custom Verification + +```python +# ralph-loop.py +def custom_check(): + """Custom verification logic.""" + # Check something + return True, None + +# In verify(): +ok, error = custom_check() +if not ok: + return False, error +``` + +--- + +## Checklist + +- [ ] Hook logic modified +- [ ] Settings updated (if needed) +- [ ] Manual test passed +- [ ] Integration test passed +- [ ] Documented in trellis-local diff --git a/Skills/new-skills/trellis-meta/references/how-to-modify/overview.md b/Skills/new-skills/trellis-meta/references/how-to-modify/overview.md new file mode 100644 index 0000000..49ff53c --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/how-to-modify/overview.md @@ -0,0 +1,227 @@ +# How-To Modification Guide + +Common Trellis customization scenarios and what files need to be modified. 
+ +--- + +## Quick Reference + +| Task | Files to Modify | Platform | +| ------------------------------------------------- | ------------------------------------ | -------- | +| [Add slash command](#add-slash-command) | commands/, trellis-local | All | +| [Add agent](#add-agent) | agents/, hook, jsonl, trellis-local | CC | +| [Modify hook](#modify-hook) | hooks/, settings.json, trellis-local | CC | +| [Add spec category](#add-spec-category) | spec/, jsonl, trellis-local | All | +| [Change verify commands](#change-verify-commands) | worktree.yaml | CC | +| [Add workflow phase](#add-workflow-phase) | task.json, dispatch, trellis-local | CC | +| [Add post_create step](#add-post_create-step) | worktree.yaml | CC | +| [Modify session start](#modify-session-start) | session-start.py, trellis-local | CC | +| [Add core script](#add-core-script) | scripts/, trellis-local | All | +| [Change task types](#change-task-types) | task.py, jsonl templates | All | + +**Platform**: `All` = All 9 platforms | `CC` = Claude Code + iFlow (hook-capable) + +--- + +## Detailed Guides + +### Add Slash Command + +**Scenario**: Add a new `/trellis:my-command` command. + +**Files to modify**: + +``` +.claude/commands/trellis/my-command.md # Create: Command prompt +.cursor/commands/my-command.md # Create: Mirror for Cursor (optional) +.trellis-local/SKILL.md # Update: Document the change +``` + +**Steps**: + +1. Create command file with YAML frontmatter +2. Mirror to Cursor if needed +3. Document in trellis-local + +→ See `add-command.md` for details. + +--- + +### Add Agent + +**Scenario**: Add a new agent type like `my-agent`. 
+ +**Files to modify**: + +``` +.claude/agents/my-agent.md # Create: Agent definition +.claude/hooks/inject-subagent-context.py # Modify: Add agent handling +.trellis/tasks/{template}/my-agent.jsonl # Create: Context template +.trellis-local/SKILL.md # Update: Document the change +``` + +**Optional**: + +``` +.claude/agents/dispatch.md # Modify: If adding to pipeline +task.json template # Modify: Add to next_action +``` + +→ See `add-agent.md` for details. + +--- + +### Modify Hook + +**Scenario**: Change hook behavior (context injection, validation, etc.). + +**Files to modify**: + +``` +.claude/hooks/{hook-name}.py # Modify: Hook logic +.claude/settings.json # Modify: If changing matcher/timeout +.trellis-local/SKILL.md # Update: Document the change +``` + +→ See `modify-hook.md` for details. + +--- + +### Add Spec Category + +**Scenario**: Add a new spec category like `mobile/`. + +**Files to modify**: + +``` +.trellis/spec/mobile/index.md # Create: Category index +.trellis/spec/mobile/*.md # Create: Spec files +.trellis/tasks/{template}/*.jsonl # Update: Reference new specs +.trellis-local/SKILL.md # Update: Document the change +``` + +→ See `add-spec.md` for details. + +--- + +### Change Verify Commands + +**Scenario**: Add or modify Ralph Loop verification commands. + +**Files to modify**: + +``` +.trellis/worktree.yaml # Modify: verify section +``` + +**Example**: + +```yaml +verify: + - pnpm lint + - pnpm typecheck + - pnpm test # Add this +``` + +→ See `change-verify.md` for details. + +--- + +### Add Workflow Phase + +**Scenario**: Add a new phase to the task workflow. + +**Files to modify**: + +``` +task.json (in task directories) # Modify: next_action array +.claude/agents/dispatch.md # Modify: Handle new phase +.claude/agents/{new-phase}.md # Create: If new agent needed +.claude/hooks/inject-subagent-context.py # Modify: If new agent +.trellis-local/SKILL.md # Update: Document the change +``` + +→ See `add-phase.md` for details. 
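The `next_action` array drives phase ordering. A minimal sketch of how a dispatcher might derive the execution order — a hypothetical helper; the real logic lives in `dispatch.md` and `task.py`:

```python
def phase_order(next_action):
    """Return action names sorted by their phase number."""
    return [step["action"] for step in sorted(next_action, key=lambda s: s["phase"])]
```

For the default pipeline this yields `["implement", "check", "finish", "create-pr"]`.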
+ +--- + +### Add post_create Step + +**Scenario**: Add setup steps after worktree creation. + +**Files to modify**: + +``` +.trellis/worktree.yaml # Modify: post_create section +``` + +**Example**: + +```yaml +post_create: + - pnpm install + - pnpm db:migrate # Add this +``` + +--- + +### Modify Session Start + +**Scenario**: Change what context is injected at session start. + +**Files to modify**: + +``` +.claude/hooks/session-start.py # Modify: Injection logic +.trellis-local/SKILL.md # Update: Document the change +``` + +→ See `modify-session-start.md` for details. + +--- + +### Add Core Script + +**Scenario**: Add a new automation script. + +**Files to modify**: + +``` +.trellis/scripts/my-script.py # Create: Script +.trellis/scripts/common/*.py # Create/Modify: If shared utilities +.trellis-local/SKILL.md # Update: Document the change +``` + +→ See `add-script.md` for details. + +--- + +### Change Task Types + +**Scenario**: Add or modify task dev_type (frontend, backend, etc.). + +**Files to modify**: + +``` +.trellis/scripts/task.py # Modify: init-context logic +.trellis/tasks/{template}/*.jsonl # Create: New JSONL templates +.trellis-local/SKILL.md # Update: Document the change +``` + +→ See `change-task-types.md` for details. 
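Task-type handling usually reduces to a lookup from `dev_type` to the JSONL templates that should seed the task directory. A hypothetical sketch — the type names, template names, and fallback are illustrative, not the actual `task.py` behavior:

```python
# Hypothetical mapping; adjust to the dev_types and templates your project defines.
TEMPLATES_BY_DEV_TYPE = {
    "frontend": ["implement.jsonl", "check.jsonl"],
    "backend": ["implement.jsonl", "check.jsonl"],
}

def templates_for(dev_type):
    """Return the JSONL templates to copy for a task's dev_type."""
    return TEMPLATES_BY_DEV_TYPE.get(dev_type, ["implement.jsonl"])  # illustrative fallback
```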
+ +--- + +## Documents in This Directory + +| Document | Scenario | +| ------------------------- | -------------------------------- | +| `add-command.md` | Adding slash commands | +| `add-agent.md` | Adding new agent types | +| `modify-hook.md` | Modifying hook behavior | +| `add-spec.md` | Adding spec categories | +| `change-verify.md` | Changing verify commands | +| `add-phase.md` | Adding workflow phases | +| `modify-session-start.md` | Changing session start injection | +| `add-script.md` | Adding automation scripts | +| `change-task-types.md` | Adding task types | diff --git a/Skills/new-skills/trellis-meta/references/meta/platform-compatibility.md b/Skills/new-skills/trellis-meta/references/meta/platform-compatibility.md new file mode 100644 index 0000000..25df7f4 --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/meta/platform-compatibility.md @@ -0,0 +1,185 @@ +# Platform Compatibility Reference + +Detailed guide on Trellis feature availability across 9 AI coding platforms. + +--- + +## Overview + +Trellis v0.3.0 supports **9 platforms**. The key differentiator is **hook support** — Claude Code and iFlow have Python hook systems that enable automatic context injection and quality enforcement. Other platforms use commands/skills with manual context loading. 
+ +| Platform | Config Directory | CLI Flag | Hooks | Command Format | +| ----------- | ----------------------------- | --------------- | ----- | -------------- | +| Claude Code | `.claude/` | (default) | ✅ | Markdown | +| iFlow | `.iflow/` | `--iflow` | ✅ | Markdown | +| Cursor | `.cursor/` | `--cursor` | ❌ | Markdown | +| OpenCode | `.opencode/` | `--opencode` | ❌ | Markdown | +| Codex | `.agents/skills/` | `--codex` | ❌ | Skills | +| Kilo | `.kilocode/commands/trellis/` | `--kilo` | ❌ | Markdown | +| Kiro | `.kiro/skills/` | `--kiro` | ❌ | Skills | +| Gemini CLI | `.gemini/commands/trellis/` | `--gemini` | ❌ | TOML | +| Antigravity | `.agent/workflows/` | `--antigravity` | ❌ | Markdown | + +--- + +## Platform Architecture + +``` +┌─────────────────────────────────────────────────────────────────────────┐ +│ TRELLIS FEATURE LAYERS │ +├─────────────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌────────────────────────────────────────────────────────────────────┐ │ +│ │ LAYER 3: AUTOMATION │ │ +│ │ Hooks, Ralph Loop, Auto-injection, Multi-Session │ │ +│ │ ─────────────────────────────────────────────────────────────────│ │ +│ │ Platform: Claude Code + iFlow │ │ +│ └────────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ┌────────────────────────────────▼───────────────────────────────────┐ │ +│ │ LAYER 2: AGENTS │ │ +│ │ Agent definitions, Task tool, Subagent invocation │ │ +│ │ ─────────────────────────────────────────────────────────────────│ │ +│ │ Platform: Claude Code + iFlow (full), others (manual) │ │ +│ └────────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ┌────────────────────────────────▼───────────────────────────────────┐ │ +│ │ LAYER 1: PERSISTENCE │ │ +│ │ Workspace, Tasks, Specs, Commands/Skills, JSONL files │ │ +│ │ ─────────────────────────────────────────────────────────────────│ │ +│ │ Platform: ALL 9 (file-based, portable) │ │ +│ 
└────────────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────┘ +``` + +--- + +## Detailed Feature Breakdown + +### Layer 1: Persistence (All 9 Platforms) + +These features work on all platforms because they're file-based. + +| Feature | Location | Description | +| ------------------ | ------------------------ | ----------------------------------------- | +| Workspace system | `.trellis/workspace/` | Journals, session history | +| Task system | `.trellis/tasks/` | Task tracking, requirements | +| Spec system | `.trellis/spec/` | Coding guidelines | +| Commands/Skills | Platform-specific dirs | Command prompts in each platform's format | +| JSONL context | `*.jsonl` in task dirs | Context file lists | +| Developer identity | `.trellis/.developer` | Who is working | +| Current task | `.trellis/.current-task` | Active task pointer | + +### Layer 2: Agents (Claude Code + iFlow Full, Others Manual) + +| Feature | Claude Code / iFlow | Other Platforms | +| ------------------ | ------------------------------ | ------------------------- | +| Agent definitions | Auto-loaded via `--agent` flag | Read agent files manually | +| Task tool | Full subagent support | No Task tool | +| Context injection | Automatic via hooks | Manual copy-paste | +| Agent restrictions | Enforced by definition | Honor code only | + +### Layer 3: Automation (Claude Code + iFlow Only) + +| Feature | Dependency | Why Hook-Platforms Only | +| ---------------------- | ------------------ | -------------------------------- | +| SessionStart hook | `settings.json` | Hook system for lifecycle events | +| PreToolUse hook | Hook system | Intercepts tool calls | +| SubagentStop hook | Hook system | Controls agent lifecycle | +| Auto context injection | PreToolUse:Task | Hooks inject JSONL content | +| Ralph Loop | SubagentStop:check | Blocks agent until verify passes | +| Multi-Session | CLI + hooks | Session resume, 
worktree scripts | + +**No workaround**: These features fundamentally require a hook system. + +--- + +## Platform Usage Guides + +### Claude Code + iFlow (Full Support) + +All features work automatically. Hooks provide context injection and quality enforcement. + +```bash +# Initialize +trellis init -u your-name # Claude Code (default) +trellis init --iflow -u your-name # iFlow +``` + +### Cursor + +```bash +trellis init --cursor -u your-name +``` + +- **Works**: Workspace, tasks, specs, commands (read via `.cursor/commands/trellis-*.md`) +- **Doesn't work**: Hooks, auto-injection, Ralph Loop, Multi-Session +- **Workaround**: Manually read spec files at session start + +### OpenCode + +```bash +trellis init --opencode -u your-name +``` + +- **Works**: Workspace, tasks, specs, agents, commands +- **Note**: Full subagent context injection requires [oh-my-opencode](https://github.com/nicepkg/oh-my-opencode). Without it, agents use Self-Loading fallback. + +### Codex + +```bash +trellis init --codex -u your-name +``` + +- Commands mapped to Codex Skills format under `.agents/skills/` +- Use `$start`, `$finish-work`, `$brainstorm` etc. 
to invoke + +### Kilo, Kiro, Gemini CLI, Antigravity + +```bash +trellis init --kilo -u your-name +trellis init --kiro -u your-name +trellis init --gemini -u your-name +trellis init --antigravity -u your-name +``` + +- Each platform uses its native command format +- Core file-based systems work the same across all platforms + +--- + +## Version Compatibility Matrix + +| Trellis Version | Platforms Supported | +| --------------- | ------------------- | +| 0.2.x | Claude Code, Cursor | +| 0.3.0 | All 9 platforms | + +--- + +## Checking Your Platform + +### Claude Code + +```bash +claude --version +cat .claude/settings.json | grep -A 5 '"hooks"' +``` + +### Other Platforms + +```bash +# Check if platform config directory exists +ls -la .cursor/ .opencode/ .iflow/ .agents/ .kilocode/ .kiro/ .gemini/ .agent/ 2>/dev/null +``` + +### Determining Support Level + +``` +Does the platform have hook support? +├── YES (Claude Code, iFlow) → Full Trellis support +└── NO (all others) → Partial support + ├── Can read files → Layer 1 works + └── Has agent system → Layer 2 partial +``` diff --git a/Skills/new-skills/trellis-meta/references/meta/self-iteration-guide.md b/Skills/new-skills/trellis-meta/references/meta/self-iteration-guide.md new file mode 100644 index 0000000..535a17e --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/meta/self-iteration-guide.md @@ -0,0 +1,305 @@ +# Trellis Self-Iteration Guide + +How to maintain skill documentation when customizing Trellis. + +--- + +## Core Principle + +**Every Trellis modification MUST be documented in the appropriate skill.** + +``` +Modification to Trellis → Update trellis-local (project skill) +Update to Trellis itself → Update trellis-meta (meta skill) +``` + +--- + +## Decision Tree + +``` +Is this a modification to Trellis? +│ +├── YES: What kind? 
+│   │
+│   ├── Project-specific customization
+│   │   └── Update .claude/skills/trellis-local/SKILL.md
+│   │
+│   ├── Bug fix to core Trellis
+│   │   └── Update ~/.claude/skills/trellis-meta/
+│   │       (or project copy if reviewing first)
+│   │
+│   └── New feature to core Trellis
+│       └── Update trellis-meta after release
+│
+└── NO: Just using Trellis
+    └── No skill update needed
+```
+
+---
+
+## Self-Iteration Workflow
+
+### Step 1: Before Making Changes
+
+```bash
+# Check if the project skill exists
+ls .claude/skills/trellis-local/SKILL.md
+
+# If not, create it from the template
+mkdir -p .claude/skills/trellis-local
+# Copy template from trellis-meta/references/trellis-local-template.md
+```
+
+### Step 2: Make the Trellis Modification
+
+Do your work: add a command, modify a hook, etc.
+
+### Step 3: Document in the Project Skill
+
+Open `.claude/skills/trellis-local/SKILL.md` and:
+
+1. **Find the right section** (Commands/Agents/Hooks/Specs/Workflow)
+2. **Add an entry using the template**
+3. **Update the changelog**
+4. **Update the summary counts**
+
+### Step 4: Verify Documentation
+
+Ask yourself:
+
+- [ ] Would another AI understand what was changed?
+- [ ] Is the "why" documented?
+- [ ] Are affected files listed?
+- [ ] Is the date recorded?
+
+---
+
+## Documentation Templates
+
+### New Command
+
+````markdown
+#### /trellis:my-command
+
+- **File**: `.claude/commands/trellis/my-command.md`
+- **Purpose**: Brief description of what it does
+- **Added**: 2026-01-31
+- **Reason**: Why this command was needed
+
+**Usage**:
+
+```
+/trellis:my-command [args]
+```
+
+**Example**:
+User asks "..." → Command does "..."
+````
+
+### New Agent
+
+````markdown
+#### my-agent
+
+- **File**: `.claude/agents/my-agent.md`
+- **Purpose**: What this agent specializes in
+- **Tools**: Read, Write, Edit, Bash, Glob, Grep
+- **Model**: opus
+- **Added**: 2026-01-31
+- **Reason**: Why this agent was needed
+
+**Hook Integration**:
+
+- Added to `inject-subagent-context.py` at line X
+- Uses `my-agent.jsonl` for context
+
+**Invocation**:
+
+```
+Task(subagent_type="my-agent", prompt="...")
+```
+````
+
+### Hook Modification
+
+````markdown
+#### inject-subagent-context.py
+
+- **Hook Event**: PreToolUse:Task
+- **Change**: Added handling for `my-agent` subagent type
+- **Lines Modified**: 45-67, 120-135
+- **Date**: 2026-01-31
+- **Reason**: Support new agent type
+
+**Code Changes**:
+
+```python
+# Added constant
+AGENT_MY_AGENT = "my-agent"
+
+# Added to agent list
+AGENTS_ALL = (..., AGENT_MY_AGENT)
+
+# Added context function
+def get_my_agent_context(repo_root, task_dir):
+    ...
+```
+````
+
+### Spec Category Addition
+
+````markdown
+#### security/
+
+- **Path**: `.trellis/spec/security/`
+- **Purpose**: Security guidelines for the project
+- **Files**:
+  - `index.md` - Category overview
+  - `auth-guidelines.md` - Authentication patterns
+  - `input-validation.md` - Validation requirements
+- **Added**: 2026-01-31
+- **Reason**: Project requires security-focused development
+
+**JSONL Integration**:
+
+```jsonl
+{"file": ".trellis/spec/security/index.md", "reason": "Security guidelines"}
+```
+````
+
+### Workflow Change
+
+````markdown
+#### Custom Phase Order
+
+- **What**: Changed default task phases to include a research phase
+- **Files Affected**:
+  - `.trellis/scripts/task.py` (init-context function)
+  - Default task.json template
+- **Date**: 2026-01-31
+- **Reason**: All tasks in this project need research first
+
+**New Default next_action**:
+
+```json
+[
+  {"phase": 1, "action": "research"},
+  {"phase": 2, "action": "implement"},
+  {"phase": 3, "action": "check"},
+  {"phase": 4, "action": "finish"},
+  {"phase": 5, "action": "create-pr"}
+]
+```
+````
+
+---
+
+## Changelog Format
+
+```markdown
+### 2026-01-31 - Feature: Custom Research Phase
+
+- Added research phase as default first phase
+- Modified task.py init-context
+- Updated task.json template
+- Reason: Project complexity requires upfront research
+
+### 2026-01-30 - Bugfix: Hook Timeout
+
+- Increased ralph-loop.py timeout from 10s to 30s
+- Reason: Complex verification commands were timing out
+
+### 2026-01-29 - Initial Setup
+
+- Initialized trellis-local skill
+- Base Trellis version: 0.3.0
+```
+
+---
+
+## Multi-Project Scenario
+
+When working with multiple Trellis projects:
+
+```
+~/projects/
+├── project-a/
+│   └── .claude/skills/trellis-local/   # Project A customizations
+├── project-b/
+│   └── .claude/skills/trellis-local/   # Project B customizations
+└── project-c/
+    └── .claude/skills/trellis-local/   # Project C customizations
+
+~/.claude/skills/
+└── trellis-meta/                       # Shared meta-skill (vanilla Trellis)
+```
+
+**Each project has its own `trellis-local`** documenting that project's specific customizations.
+
+**The meta-skill is shared** and documents vanilla Trellis.
+
+---
+
+## Upgrade Workflow
+
+When upgrading Trellis to a new version:
+
+### 1. Review New Version Changes
+
+```bash
+# Compare the new meta-skill with the current one
+diff -r ~/.claude/skills/trellis-meta/references/ \
+        ./new-trellis-meta/references/
+```
+
+### 2. Check for Conflicts
+
+Review each customization in `trellis-local`:
+
+- Does the new version include this feature natively?
+- Does the new version break this customization?
+- Can this customization be simplified?
+
+### 3. Merge Carefully
+
+```bash
+# Backup the current meta-skill
+cp -r ~/.claude/skills/trellis-meta ~/.claude/skills/trellis-meta.backup
+
+# Update the meta-skill
+cp -r ./new-trellis-meta/* ~/.claude/skills/trellis-meta/
+```
+
+### 4. 
Update Project Skills + +Add migration note to `trellis-local`: + +```markdown +### 2026-02-01 - Upgraded to Trellis 0.4.0 + +- Updated meta-skill to 0.4.0 +- Kept custom `security-agent` (not in vanilla) +- Migrated `my-command` to new command format +- Removed `old-hook` customization (now in vanilla) +``` + +--- + +## AI Instructions + +When an AI modifies Trellis, it MUST: + +1. **Check** if `trellis-local` exists in the project +2. **Create** it from template if missing +3. **Document** the change immediately after making it +4. **Update** the changelog with date and description +5. **Verify** the documentation is complete + +**Never** modify `trellis-meta` for project-specific changes. + +**Always** tell the user what was documented. + +Example AI response: + +> "I've added the `/trellis:deploy` command and documented it in `.claude/skills/trellis-local/SKILL.md` under the Commands section." diff --git a/Skills/new-skills/trellis-meta/references/meta/trellis-local-template.md b/Skills/new-skills/trellis-meta/references/meta/trellis-local-template.md new file mode 100644 index 0000000..8fd219a --- /dev/null +++ b/Skills/new-skills/trellis-meta/references/meta/trellis-local-template.md @@ -0,0 +1,309 @@ +# Trellis Local Skill Template + +Copy this template to create a project-specific `trellis-local` skill. + +--- + +## How to Use + +1. Create directory: `mkdir -p .claude/skills/trellis-local` +2. Copy the template below to `.claude/skills/trellis-local/SKILL.md` +3. Replace `[PROJECT_NAME]` with your project name +4. Update version info + +--- + +## Template + +````markdown +--- +name: trellis-local +description: | + Project-specific Trellis customizations for [PROJECT_NAME]. + This skill documents all modifications made to the vanilla Trellis system. + Inherits from trellis-meta for base architecture documentation. + Use this skill to understand what's been customized in this project's Trellis setup. 
+--- + +# Trellis Local - [PROJECT_NAME] + +## Overview + +This skill documents all customizations made to Trellis in this project. For vanilla Trellis documentation, see the `trellis-meta` skill. + +## Base Information + +| Field | Value | +| ---------------- | ---------- | +| Trellis Version | X.X.X | +| Date Initialized | YYYY-MM-DD | +| Last Updated | YYYY-MM-DD | + +--- + +## Customizations Summary + +Quick reference of what's been modified: + +- **Commands**: X added, Y modified +- **Agents**: X added, Y modified +- **Hooks**: X modified +- **Specs**: X categories added +- **Workflow**: [summary of changes] + +--- + +## Commands + +### Added Commands + + + +(none yet) + +### Modified Commands + + + +(none yet) + +--- + +## Agents + +> **Note**: Agent auto-loading is [CC] (Claude Code only). On Cursor, agents are read manually. + +### Added Agents + + + +(none yet) + +### Modified Agents + + + +(none yet) + +--- + +## Hooks [CC] + +> **Claude Code Only**: Hooks require Claude Code's hook system. Not available on Cursor. + +### Modified Hooks + + + +(none yet) + +--- + +## Specs + +### Added Categories + + + +(none yet) + +### Modified Specs + + + +(none yet) + +--- + +## Workflow Changes + +### Task Configuration + + + +(none yet) + +### JSONL Templates + + + +(none yet) + +--- + +## worktree.yaml Customizations + +```yaml +# Document any changes to worktree.yaml here +``` + +(using defaults) + +--- + +## Changelog + +Record all changes chronologically. + +### YYYY-MM-DD - Initial Setup + +- Initialized trellis-local skill +- Base Trellis version: X.X.X + + + +--- + +## Migration Notes + +Document any special steps needed when upgrading Trellis. + + + +(none yet) + +--- + +## Known Issues + +Track any issues with customizations. 
+
+
+(none yet)
+
+````
+
+---
+
+## Automation Script
+
+To auto-create the skill, run:
+
+```bash
+#!/bin/bash
+# create-trellis-local.sh
+
+PROJECT_NAME="${1:-$(basename "$(pwd)")}"
+SKILL_DIR=".claude/skills/trellis-local"
+
+mkdir -p "$SKILL_DIR"
+
+# Resolve values up front so the generated markdown stays clean
+TRELLIS_VERSION="$(grep '"version"' package.json 2>/dev/null | head -1 | cut -d'"' -f4)"
+TODAY="$(date +%Y-%m-%d)"
+
+# Note: the heredoc delimiter is unquoted so the variables below expand;
+# a quoted 'EOF' would write the literal $VARIABLE text into the file.
+cat > "$SKILL_DIR/SKILL.md" << EOF
+---
+name: trellis-local
+description: |
+  Project-specific Trellis customizations for $PROJECT_NAME.
+  This skill documents all modifications made to the vanilla Trellis system.
+  Inherits from trellis-meta for base architecture documentation.
+---
+
+# Trellis Local - $PROJECT_NAME
+
+## Base Information
+
+| Field | Value |
+|-------|-------|
+| Trellis Version | ${TRELLIS_VERSION:-unknown} |
+| Date Initialized | $TODAY |
+| Last Updated | $TODAY |
+
+## Customizations
+
+(none yet - document changes as you make them)
+
+## Changelog
+
+### $TODAY - Initial Setup
+
+- Initialized trellis-local skill
+EOF
+
+echo "Created $SKILL_DIR/SKILL.md for project: $PROJECT_NAME"
+```
diff --git a/Skills/resource-decision-guide/SKILL.md b/Skills/resource-decision-guide/SKILL.md
new file mode 100644
index 0000000..9a336dc
--- /dev/null
+++ b/Skills/resource-decision-guide/SKILL.md
@@ -0,0 +1,189 @@
+---
+name: SIG-IGGA Resource Decision Guide
+description: Master guide that maps each task type to the optimal resources. Teaches the AI to use the right resource for each situation, avoiding over-engineering on simple tasks.
+---
+
+# 🧭 Resource Decision Guide — SIG-IGGA-AVISOS
+
+> **Golden Rule**: Do not use specialized tools for simple tasks.
+> A CSS color change does NOT need a PostgreSQL MCP.
+
+---
+
+## 🔀 Quick Decision Tree
+
+```
+What type of task is it?
+│
+├─ 🐛 BUG / PRODUCTION ERROR
+│   ├─ 500 error in the API? 
→ MCP Sentry + MCP PostgreSQL
+│   ├─ CORS error? → Workflow /debug-cors + MCP Fetch
+│   ├─ Map/GIS error? → Skill sig-igga-gis + MCP Filesystem
+│   └─ Data/schema error? → MCP PostgreSQL directly
+│
+├─ 🚀 DEPLOY
+│   ├─ Frontend deploy? → Workflow /deploy-frontend
+│   ├─ Full deploy? → Skill sig-igga-deploy
+│   └─ Verify production? → Workflow /health-check
+│
+├─ 📊 REPORTS / DATA
+│   ├─ GEAM Excel report? → Skill sig-igga-export
+│   ├─ Data query? → MCP PostgreSQL
+│   └─ Aviso statistics? → MCP PostgreSQL + MCP Fetch (API)
+│
+├─ 🗺️ GIS / GEOSPATIAL
+│   ├─ Validate GeoJSON? → Skill sig-igga-gis
+│   ├─ Verify coordinates? → Skill sig-igga-gis + MCP PostgreSQL
+│   ├─ Import a layer? → MCP Filesystem + Skill sig-igga-gis
+│   └─ Projection problem? → Skill sig-igga-gis
+│
+├─ 🏗️ DEVELOPMENT / NEW FEATURE
+│   ├─ Simple task (CSS, minor fix)? → NO extra resources (baseline)
+│   ├─ Complex multi-module feature? → Skill antigravity-skill-orchestrator
+│   ├─ New visual component? → Skill antigravity-design-expert
+│   ├─ Planning a refactor? → MCP Sequential Thinking
+│   └─ Looking for a specific skill? → MCP SkillsMP
+│
+├─ 🧪 TESTING
+│   ├─ API test? → Skill sig-igga-testing + MCP Fetch
+│   ├─ Frontend test? → Skill sig-igga-testing
+│   └─ Quick health check? → Workflow /health-check
+│
+├─ 📝 CODE MANAGEMENT
+│   ├─ View diffs/history? → MCP Git
+│   ├─ Create a PR/Issue? → MCP GitHub
+│   ├─ View prod errors? → MCP Sentry
+│   └─ Read project files? → MCP Filesystem
+│
+├─ 🔍 RESEARCH / ANALYSIS
+│   ├─ Analyze past sessions? → Skill analyze-project
+│   ├─ Find new skills? → MCP SkillsMP
+│   ├─ Read web documentation? → MCP Fetch
+│   └─ Complex planning? → MCP Sequential Thinking
+│
+└─ 🔧 MAINTENANCE
+    ├─ Start local dev? → Workflow /dev-start
+    ├─ Check overall status? → Workflow /health-check
+    └─ CI/CD? 
→ GitHub Actions ci.yml
+```
+
+---
+
+## 📋 Complete Resource Inventory
+
+### Level 1: MCPs (9 servers — direct access)
+
+| MCP | When TO use | When NOT to use |
+|---|---|---|
+| 🐙 **GitHub** | Issues, PRs, releases, viewing remote code | Local changes (use the Git MCP or direct edits) |
+| 🗄️ **PostgreSQL** | Diagnosing data, validating schema, quick queries | Schema changes (do them manually with safe SQL) |
+| 🌐 **Fetch** | Testing the API, verifying CORS, reading OpenLayers docs | Pages behind authentication |
+| 🧠 **Memory** | Remembering decisions, successful skill patterns | Ephemeral or temporary data |
+| 🧩 **Sequential Thinking** | Planning migrations, large refactors | Simple, direct tasks |
+| 📂 **Git** | Commits, branches, viewing history, diffs | Destructive operations without confirmation |
+| 📁 **Filesystem** | Reading GeoJSON, configs, logs, layer files | Writing large files (native tools are better) |
+| 🔍 **Sentry** | Viewing prod errors, analyzing stack traces, monitoring | Local development errors |
+| 🎯 **SkillsMP** | Finding new skills, installing dynamic capabilities | Tasks covered by existing skills |
+
+### Level 2: Custom Skills (4 — SIG-IGGA domain)
+
+| Skill | Activate when... | DON'T activate when... 
|
+|---|---|---|
+| 📊 **sig-igga-export** | "Generate a report", "export to Excel", "GEAM" | Simple data queries |
+| 🧪 **sig-igga-testing** | "Test", "verify endpoints", "smoke test" | A single quick curl |
+| 🗺️ **sig-igga-gis** | "GeoJSON", "coordinates", "KML", "PostGIS", "layer" | Data with no geographic component |
+| 🚀 **sig-igga-deploy** | "Deploy", "ship it", "push to production" | Local changes without a push |
+
+### Level 3: Antigravity Skills (8 — meta/utilities)
+
+| Skill | Purpose | When to activate |
+|---|---|---|
+| 🎯 **antigravity-skill-orchestrator** | Picking the right combination of skills | Complex multi-module tasks where you don't know which skills you need |
+| 🔍 **analyze-project** | Forensic analysis of work sessions | At the end of a sprint, or when recurring problems pile up |
+| 🎨 **antigravity-design-expert** | Premium UI: glassmorphism, animations, 3D CSS | When a wow visual component is needed |
+| ⚙️ **antigravity-manager** | Guide for the AntigravityManager project | Only when working on Antigravity Manager itself |
+| 💰 **antigravity-balance** | Checking the Antigravity token quota | When you suspect the quota is running low |
+| 🔄 **antigravity-rotator** | Automatic account rotation | Managing multiple accounts (does not apply to SIG-IGGA) |
+| 📋 **antigravity-workflows** | Orchestrating multi-phase workflows | SaaS MVP, Security Audit, AI Agent, Browser QA |
+| 🏗️ **trellis-meta** | Structured development framework with specs | Large projects with multiple developers |
+
+### Level 4: Workflows (4 — quick shortcuts)
+
+| Workflow | Trigger | Time |
+|---|---|---|
+| `/health-check` | "Is everything OK?", "check production" | ~30 sec |
+| `/debug-cors` | "CORS error", "Access-Control", "blocked" | ~1 min |
+| `/dev-start` | "Start local", "npm run dev" | ~2 min |
+| `/deploy-frontend` | "Deploy frontend", "push to Cloudflare" | ~3 min |
+
+### Level 5: CI/CD (GitHub Actions)
+
+| 
Pipeline | Trigger | What it does |
+|---|---|---|
+| `ci.yml` | Push/PR to main | Lint Python + TypeScript + health check |
+| `etl_cron.yml` | Fridays at 23:00 UTC | ETL SharePoint → Railway |
+
+---
+
+## ⚡ Optimal Combinations per Scenario
+
+### Scenario: "The app won't load in production"
+
+```
+1. /health-check → Are the backend/frontend alive?
+2. MCP Sentry → Any new errors?
+3. MCP Fetch (CORS test) → Is CORS blocking?
+4. MCP PostgreSQL → Is the DB connected?
+5. Skill sig-igga-testing → Full smoke test
+```
+
+### Scenario: "Add a new GIS layer to the map"
+
+```
+1. MCP Filesystem → Read the GeoJSON file
+2. Skill sig-igga-gis → Validate geometries and projection
+3. MCP PostgreSQL → Verify the schema in the DB
+4. Direct editing → Add the component in the frontend
+5. /deploy-frontend → Ship the changes
+```
+
+### Scenario: "Generate the monthly report for ISA"
+
+```
+1. MCP PostgreSQL → Extract the aviso data
+2. Skill sig-igga-export → Generate the formatted Excel file
+3. MCP Git → Commit the report
+```
+
+### Scenario: "Redesign the analytics dashboard"
+
+```
+1. MCP Sequential Thinking → Plan the components
+2. Skill antigravity-design-expert → Premium glassmorphism design
+3. Direct editing → Implement the React components
+4. Skill sig-igga-testing → Verify everything works
+5. /deploy-frontend → Deploy
+```
+
+### Scenario: "Debug a 500 error"
+
+```
+1. MCP Sentry → See the exact stack trace
+2. MCP PostgreSQL → Verify the data state
+3. MCP Fetch → Reproduce the request
+4. Direct editing → Apply the fix
+5. 
/health-check → Verify the fix in production
+```
+
+---
+
+## 🚫 Anti-Patterns (DON'T do this)
+
+| ❌ Don't | ✅ Do instead |
+|---|---|
+| Use the PostgreSQL MCP to change a CSS color | Edit the file directly |
+| Activate skill-orchestrator to rename a variable | Rename it directly |
+| Use the Fetch MCP to read a local file | Use the Filesystem MCP or view_file |
+| Activate analyze-project for a quick fix | Only for post-sprint analysis |
+| Use Sequential Thinking for a 1-step task | Just execute it directly |
+| Search SkillsMP for something you already have as a local skill | Use the local skill first |
diff --git a/assets/css/index.css b/assets/css/index.css
index 0c4f231..08ea9b2 100644
--- a/assets/css/index.css
+++ b/assets/css/index.css
@@ -4326,3 +4326,1043 @@ select.portal-input {
 .tlv2-card { padding: 1.2rem 1.3rem; }
 .tlv2-card::after { display: none; }
 }
+
+/* ══════════════════════════════════════════════════════════════
+   V2 REDESIGN COMPONENTS — Clean B2B GovTech Aesthetic
+   ══════════════════════════════════════════════════════════════ */
+
+/* --- Section Eyebrow (replaces // LABEL_FORMAT) --- */
+.section-eyebrow-v2 {
+  display: inline-flex;
+  align-items: center;
+  gap: 0.5rem;
+  color: #00E87A;
+  font-size: 0.78rem;
+  font-weight: 700;
+  letter-spacing: 0.1em;
+  text-transform: uppercase;
+  margin-bottom: 0.75rem;
+  font-family: 'Plus Jakarta Sans', var(--font-main);
+}
+.section-eyebrow-v2 > span:first-child {
+  display: inline-block;
+  width: 20px;
+  height: 2px;
+  background: #00E87A;
+}
+
+/* --- Services Grid V2 --- */
+.services-grid-v2 {
+  display: grid;
+  grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
+  gap: 1.5rem;
+  margin-top: 2.5rem;
+}
+
+.service-card-v2 {
+  background: rgba(255,255,255,0.03);
+  border: 1px solid rgba(255,255,255,0.07);
+  border-radius: 14px;
+  padding: 2rem;
+  transition: border-color 0.3s, transform 0.3s;
+  cursor: default;
+}
+.service-card-v2:hover {
+  border-color: 
rgba(0,232,122,0.25); + transform: translateY(-4px); +} + +.svc-icon { + width: 48px; + height: 48px; + background: rgba(0,232,122,0.08); + border-radius: 12px; + display: flex; + align-items: center; + justify-content: center; + font-size: 1.4rem; + margin-bottom: 1.25rem; +} + +.service-card-v2 h3 { + font-family: 'Syne', var(--font-main); + font-size: 1.05rem; + font-weight: 700; + margin-bottom: 0.5rem; + color: #fff; +} + +.service-card-v2 p { + font-size: 0.88rem; + color: var(--text-secondary); + line-height: 1.65; + margin-bottom: 1rem; + font-family: 'Plus Jakarta Sans', var(--font-main); +} + +.svc-tags { + display: flex; + gap: 0.4rem; + flex-wrap: wrap; +} +.svc-tags span { + background: rgba(0,232,122,0.07); + color: #00E87A; + border-radius: 4px; + padding: 0.2rem 0.55rem; + font-size: 0.7rem; + font-weight: 600; + letter-spacing: 0.03em; +} +.svc-tags span.blue { + background: rgba(14,165,233,0.1); + color: #0EA5E9; +} + +/* --- Fade In Animation --- */ +.fade-in-v2 { + opacity: 0; + transform: translateY(24px); + transition: opacity 0.7s ease, transform 0.7s ease; +} +.fade-in-v2.visible { + opacity: 1; + transform: translateY(0); +} + +/* --- Challenges Grid V2 --- */ +.challenges-grid-v2 { + display: grid; + grid-template-columns: repeat(auto-fit, minmax(280px, 1fr)); + gap: 1.25rem; +} + +.chal-card-v2 { + background: rgba(10, 15, 20, 0.4); + border: 1px solid rgba(255,255,255,0.05); + border-radius: 12px; + padding: 2rem 1.5rem; + transition: all 0.4s cubic-bezier(0.175, 0.885, 0.32, 1.275); + position: relative; + overflow: hidden; + backdrop-filter: blur(5px); +} +.chal-card-v2:hover { + border-color: rgba(0,232,122,0.4); + transform: translateY(-8px); + box-shadow: 0 10px 30px rgba(0,232,122,0.1); +} +.chal-glow-backdrop { + position: absolute; + inset: 0; + background: radial-gradient(circle at top right, rgba(0,232,122,0.1), transparent 70%); + opacity: 0; + transition: opacity 0.4s ease; + pointer-events: none; + z-index: 0; +} 
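+
+/* Usage note for .fade-in-v2 (above): the .visible class is expected to
+   be toggled from JavaScript, e.g. via an IntersectionObserver. Sketch
+   only — assumes a vanilla-JS setup; threshold value is illustrative:
+
+   const io = new IntersectionObserver((entries) => {
+     entries.forEach((e) => {
+       if (e.isIntersecting) e.target.classList.add('visible');
+     });
+   }, { threshold: 0.15 });
+   document.querySelectorAll('.fade-in-v2').forEach((el) => io.observe(el));
+*/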
+.chal-card-v2:hover .chal-glow-backdrop { + opacity: 1; +} + +.chal-number { + position: relative; + z-index: 2; + font-family: 'Syne', var(--font-heading); + font-size: 2.2rem; + font-weight: 800; + color: rgba(0,232,122,0.15); + margin-bottom: 1rem; + line-height: 1; + display: flex; + align-items: center; + gap: 0.5rem; + transition: color 0.4s ease; +} +.chal-card-v2:hover .chal-number { + color: rgba(0,232,122,0.4); +} + +.chal-card-v2 h4 { + position: relative; + z-index: 2; + font-family: 'Plus Jakarta Sans', var(--font-main); + font-size: 1.05rem; + font-weight: 600; + color: #fff; + margin-bottom: 1rem; + line-height: 1.4; +} + +.chal-divider { + position: relative; + z-index: 2; + width: 2rem; + height: 2px; + background: rgba(255,255,255,0.1); + margin-bottom: 1rem; + transition: width 0.4s ease, background 0.4s ease; +} +.chal-card-v2:hover .chal-divider { + width: 4rem; + background: rgba(0,232,122,0.5); +} + +.chal-sol-text { + position: relative; + z-index: 2; + font-size: 0.9rem; + color: #00E87A; + font-family: 'Plus Jakarta Sans', var(--font-main); + font-weight: 500; + display: flex; + align-items: flex-start; + gap: 0.5rem; +} + +.chal-arrow-icon { + color: rgba(0,232,122,0.6); + margin-top: 0.1rem; + font-size: 1.1rem; + transition: transform 0.3s ease; +} +.chal-card-v2:hover .chal-arrow-icon { + transform: translateX(4px); +} + +/* --- HERO V2 SAAS SPLIT LAYOUT --- */ +.hero-v2 { + display: grid; + grid-template-columns: 1.1fr 0.9fr; + gap: 4rem; + min-height: 90vh; + align-items: center; + padding: 0 5%; + position: relative; + overflow: hidden; +} +@media (max-width: 968px) { + .hero-v2 { + grid-template-columns: 1fr; + padding-top: 8rem; + gap: 2rem; + text-align: center; + } +} + +.hero-h1 { + font-family: 'Syne', var(--font-heading); + font-size: clamp(2.5rem, 5vw, 4rem); + line-height: 1.1; + letter-spacing: -0.04em; + margin-bottom: 1.5rem; + background: linear-gradient(135deg, #FFF 0%, rgba(255,255,255,0.7) 100%); + 
-webkit-background-clip: text; + background-clip: text; + -webkit-text-fill-color: transparent; +} +.hero-p { + font-family: 'Plus Jakarta Sans', var(--font-main); + font-size: 1.15rem; + color: var(--text-secondary); + line-height: 1.7; + margin-bottom: 2.5rem; + max-width: 650px; +} +@media (max-width: 968px) { + .hero-p { margin-left: auto; margin-right: auto; } + .hero-eyebrow-v2 { margin-left: auto; margin-right: auto; } +} + +.hero-cta-group { + display: flex; + gap: 1rem; + margin-bottom: 4rem; +} +@media (max-width: 968px) { + .hero-cta-group { justify-content: center; } +} +.btn-geo-primary-v2 { + background: var(--accent-electric); + color: #000; + padding: 0.8rem 2rem; + border-radius: 5px; + font-weight: 700; + font-family: 'Plus Jakarta Sans', var(--font-main); + text-decoration: none; + transition: transform 0.3s, box-shadow 0.3s; +} +.btn-geo-primary-v2:hover { + transform: translateY(-2px); + box-shadow: 0 10px 25px rgba(0,229,255,0.3); +} +.btn-geo-outline-v2 { + background: transparent; + border: 1px solid rgba(255,255,255,0.2); + color: #fff; + padding: 0.8rem 2rem; + border-radius: 5px; + font-weight: 600; + font-family: 'Plus Jakarta Sans', var(--font-main); + text-decoration: none; + transition: background 0.3s; +} +.btn-geo-outline-v2:hover { + background: rgba(255,255,255,0.05); +} + +.hero-stats-strip-v2 { + display: flex; + gap: 2rem; + align-items: center; + border-top: 1px solid rgba(255,255,255,0.1); + padding-top: 2rem; +} +@media (max-width: 968px) { + .hero-stats-strip-v2 { justify-content: center; flex-wrap: wrap; border-top:none; } +} +.h-stat-v2 { + display: flex; + flex-direction: column; + gap: 0.3rem; +} +.h-stat-v2 .val { + font-family: 'Syne', var(--font-heading); + font-size: 1.5rem; + font-weight: 700; + color: #fff; +} +.h-stat-v2 .lbl { + font-family: 'Plus Jakarta Sans', var(--font-main); + font-size: 0.8rem; + color: var(--text-tertiary); + text-transform: uppercase; + letter-spacing: 0.05em; +} +.h-stat-sep-v2 { + 
width: 1px; + height: 30px; + background: rgba(255,255,255,0.1); +} + +/* Hero Orbital Graphic */ +.orbit-container { + position: relative; + width: 100%; + aspect-ratio: 1/1; + max-width: 600px; + margin: 0 auto; + display: flex; + align-items: center; + justify-content: center; +} +.center-core { + width: 80px; + height: 80px; + background: rgba(10,10,15,0.9); + border: 1px solid var(--accent-electric); + border-radius: 50%; + display: flex; + align-items: center; + justify-content: center; + position: relative; + z-index: 10; + box-shadow: 0 0 40px rgba(0,229,255,0.2); + cursor: crosshair; + transition: box-shadow 0.3s ease, border-color 0.3s ease; +} +.center-core:hover { + box-shadow: 0 0 60px rgba(0,229,255,0.5); + border-color: #fff; +} +.center-core img { width: 40px; height: 40px; } + +/* Tooltip Styles */ +.center-core::after, .sat::after { + content: attr(data-tooltip); + position: absolute; + bottom: 120%; + left: 50%; + transform: translateX(-50%) translateY(10px); + background: rgba(10, 15, 20, 0.95); + border: 1px solid rgba(0, 229, 255, 0.4); + color: #fff; + padding: 0.6rem 1rem; + border-radius: 6px; + font-size: 0.8rem; + font-family: var(--font-mono); + white-space: nowrap; + opacity: 0; + pointer-events: none; + transition: all 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275); + box-shadow: 0 10px 30px rgba(0, 0, 0, 0.5), inset 0 0 15px rgba(0,229,255,0.1); + z-index: 100; + backdrop-filter: blur(5px); +} +.center-core:hover::after, .sat:hover::after { + opacity: 1; + transform: translateX(-50%) translateY(0); +} + +.orbit-ring { + position: absolute; + border-radius: 50%; + border: 1px dashed rgba(255,255,255,0.1); + top: 50%; left: 50%; + transform: translate(-50%, -50%); + animation: orbitSpin linear infinite; +} +.orbit-1 { width: 200px; height: 200px; animation-duration: 20s; } +.orbit-2 { width: 350px; height: 350px; animation-duration: 35s; animation-direction: reverse; } +.orbit-3 { width: 480px; height: 480px; animation-duration: 55s; 
border: 1px dashed rgba(255,255,255,0.05); } + +@keyframes orbitSpin { + 100% { transform: translate(-50%, -50%) rotate(360deg); } +} + +.sat { + position: absolute; + background: rgba(15,15,20,0.9); + border: 1px solid rgba(255,255,255,0.1); + border-radius: 50%; + display: flex; + align-items: center; + justify-content: center; + box-shadow: 0 4px 15px rgba(0,0,0,0.5); + /* Reverse rotation to keep icons upright */ + animation: satCounterSpin linear infinite; + cursor: pointer; + transition: transform 0.3s ease, border-color 0.3s ease, box-shadow 0.3s ease; +} +.sat:hover { + border-color: var(--accent-electric); + box-shadow: 0 0 20px rgba(0,229,255,0.3); + z-index: 20; +} +.sat img { width: 24px; height: 24px; transition: transform 0.3s ease; } +.sat:hover img { transform: scale(1.2); } +@keyframes satCounterSpin { + 100% { transform: rotate(-360deg); } +} +@keyframes satCounterSpinRev { + 100% { transform: rotate(360deg); } +} + +/* Position mapping */ +.orbit-1 .sat { width: 50px; height: 50px; animation-duration: 20s; margin-top: -25px; margin-left: -25px; } +.orbit-1 .sat-1 { top: 0%; left: 50%; } +.orbit-1 .sat-2 { top: 100%; left: 50%; } + +.orbit-2 .sat { width: 60px; height: 60px; animation-duration: 35s; animation-name: satCounterSpinRev; margin-top:-30px; margin-left:-30px; } +.orbit-2 .sat-3 { top: 14%; left: 14%; border-color: #00e5ff; background: rgba(0,229,255,0.05); } +.orbit-2 .sat-4 { top: 86%; left: 86%; } +.orbit-2 .sat-5 { top: 14%; left: 86%; border-color:#00E87A; background:rgba(0,232,122,0.05); } + +.orbit-3 .sat { width: 45px; height: 45px; animation-duration: 55s; margin-top:-22px; margin-left:-22px; background: rgba(20,20,25,0.7); } +.orbit-3 .sat-6 { top: 5%; left: 50%; opacity: 0.8; } +.orbit-3 .sat-7 { top: 75%; left: 93%; opacity: 0.8; } +.orbit-3 .sat-8 { top: 75%; left: 7%; opacity: 0.8; } + + +/* --- ABOUT GRID V2 (Skill Bars) --- */ +.about-grid-v2 { + display: grid; + grid-template-columns: 1fr 1fr; + gap: 5rem; + align-items: 
center; +} +@media (max-width: 968px) { + .about-grid-v2 { grid-template-columns: 1fr; gap: 3rem; } +} + +.about-highlights { + display: flex; + flex-direction: column; + gap: 1rem; + margin-top: 1.5rem; +} +.a-high { + display: flex; + align-items: center; + gap: 0.75rem; + font-family: 'Plus Jakarta Sans', var(--font-main); + color: #fff; + font-weight: 500; +} + +.skill-group { + margin-bottom: 2.2rem; + position: relative; +} +.skill-head { + display: flex; + justify-content: space-between; + margin-bottom: 0.8rem; + font-family: var(--font-mono); + font-size: 0.85rem; + color: var(--text-secondary); + font-weight: 600; + letter-spacing: 0.05em; + text-transform: uppercase; +} +.skill-head span:last-child { + color: #fff; + font-weight: 700; +} +.skill-bar { + width: 100%; + height: 8px; + background: rgba(255,255,255,0.03); + border-radius: 4px; + border: 1px solid rgba(255,255,255,0.08); + overflow: hidden; + position: relative; + box-shadow: inset 0 2px 4px rgba(0,0,0,0.5); +} +.skill-fill { + height: 100%; + border-radius: 4px; + position: relative; + /* initial width set via html style */ + transition: width 1.5s cubic-bezier(0.175, 0.885, 0.32, 1.275); +} +.skill-fill::after { + content: ''; + position: absolute; + top: 0; left: 0; right: 0; bottom: 0; + background: linear-gradient(90deg, rgba(255,255,255,0) 0%, rgba(255,255,255,0.5) 100%); + opacity: 0.5; +} +.skill-group:hover .skill-fill { + box-shadow: 0 0 15px currentColor; + filter: brightness(1.2); +} + + +/* --- TECH ROWS CONTAINER --- */ +.tech-rows-container { + display: flex; + flex-direction: column; + gap: 1.5rem; +} +.tech-row { + display: grid; + grid-template-columns: 80px 1fr auto; + background: rgba(255,255,255,0.01); + border: 1px solid rgba(255,255,255,0.05); + border-radius: 12px; + padding: 2rem; + align-items: center; + gap: 2rem; + transition: border-color 0.3s, background 0.3s; +} +.tech-row:hover { + border-color: rgba(0,229,255,0.2); + background: rgba(255,255,255,0.02); +} 
+@media (max-width: 968px) { + .tech-row { grid-template-columns: 1fr; text-align: center; justify-items: center; gap: 1rem;} +} + +.t-row-icon { + width: 60px; + height: 60px; + background: rgba(0,229,255,0.05); + border-radius: 12px; + display: flex; + align-items: center; + justify-content: center; + font-size: 1.8rem; + color: var(--accent-electric); +} +.t-row-info h3 { + font-family: 'Syne', var(--font-heading); + font-size: 1.25rem; + color: #fff; + margin-bottom: 0.5rem; +} +.t-row-info p { + font-family: 'Plus Jakarta Sans', var(--font-main); + color: var(--text-secondary); + font-size: 0.95rem; + line-height: 1.5; +} +.t-row-tags { + display: flex; + gap: 0.5rem; + flex-wrap: wrap; +} +.t-row-tags span { + background: rgba(255,255,255,0.05); + padding: 0.4rem 0.8rem; + border-radius: 5px; + font-size: 0.75rem; + color: var(--text-tertiary); + font-family: 'Plus Jakarta Sans', var(--font-main); + font-weight: 500; +} + + +/* --- METRICS GRID V3 (Cards with Base Accent) --- */ +.metrics-grid-v3 { + display: grid; + grid-template-columns: repeat(auto-fit, minmax(260px, 1fr)); + gap: 2rem; +} +.metric-card-v3 { + background: rgba(10, 15, 20, 0.6); + border: 1px solid rgba(255,255,255,0.05); + border-radius: 12px; + padding: 2.5rem 2rem; + position: relative; + overflow: hidden; + transition: all 0.4s cubic-bezier(0.175, 0.885, 0.32, 1.275); + backdrop-filter: blur(10px); +} +.metric-card-v3::before { + content: ''; + position: absolute; + top: 0; left: 0; right: 0; bottom: 0; + pointer-events: none; + background: radial-gradient(circle at center, rgba(255,255,255,0.05) 0%, transparent 70%); + opacity: 0; + transition: opacity 0.5s ease; +} +.metric-card-v3:hover { + transform: translateY(-8px); + border-color: rgba(255,255,255,0.15); + box-shadow: 0 15px 35px rgba(0,0,0,0.6); +} +.metric-card-v3:hover::before { + opacity: 1; +} +.m-v3-val { + position: relative; + z-index: 2; + font-family: 'Syne', var(--font-heading); + font-size: 3.8rem; + font-weight: 800; 
+ color: #fff; + line-height: 1; + margin-bottom: 1rem; + transition: transform 0.4s ease; +} +.metric-card-v3:hover .m-v3-val { + transform: scale(1.05); +} +.m-v3-label { + position: relative; + z-index: 2; + font-family: 'Plus Jakarta Sans', var(--font-main); + font-size: 1.15rem; + font-weight: 700; + color: #fff; + margin-bottom: 0.5rem; +} +.m-v3-sub { + position: relative; + z-index: 2; + font-family: var(--font-mono); + font-size: 0.75rem; + color: var(--text-tertiary); + text-transform: uppercase; + letter-spacing: 0.05em; +} +.m-v3-accent { + position: absolute; + bottom: 0; + left: 0; + width: 100%; + height: 4px; +} + +/* --- LAB GRID V2 (Professional Cards) --- */ +.lab-grid-v2 { + display: grid; + grid-template-columns: repeat(auto-fit, minmax(320px, 1fr)); + gap: 3rem; +} + +.lab-card-v2 { + background: rgba(10,10,15,0.7); + border: 1px solid rgba(255,255,255,0.05); + border-radius: 12px; + padding: 2.5rem; + display: flex; + flex-direction: column; + transition: transform 0.3s, border-color 0.3s, box-shadow 0.3s; + position: relative; + overflow: hidden; +} +.lab-card-v2:hover { + transform: translateY(-5px); + border-color: rgba(0,229,255,0.3); + box-shadow: 0 10px 40px rgba(0,229,255,0.1); +} +.lab-card-v2.gold-theme:hover { border-color: rgba(255,180,0,0.4); box-shadow: 0 10px 40px rgba(255,180,0,0.1); } +.lab-card-v2.green-theme:hover { border-color: rgba(0,232,122,0.4); box-shadow: 0 10px 40px rgba(0,232,122,0.1); } + +.lc-head { + display: flex; + justify-content: space-between; + align-items: center; + margin-bottom: 2rem; +} +.lc-id { + font-family: var(--font-mono); + font-size: 0.75rem; + color: var(--text-tertiary); + background: rgba(255,255,255,0.05); + padding: 0.3rem 0.6rem; + border-radius: 4px; + letter-spacing: 0.05em; +} +.lc-icon { + font-size: 2rem; + color: var(--accent-electric); +} +.gold-theme .lc-icon { color: var(--accent-gold); } +.green-theme .lc-icon { color: #00E87A; } + +.lc-title { + font-family: 'Syne', 
var(--font-heading); + font-size: 1.5rem; + color: #fff; + margin-bottom: 1rem; +} +.lc-desc { + font-family: 'Plus Jakarta Sans', var(--font-main); + color: var(--text-secondary); + font-size: 0.95rem; + line-height: 1.6; + margin-bottom: 2rem; + flex-grow: 1; +} + +.lc-tags { + display: flex; + gap: 0.5rem; + flex-wrap: wrap; + margin-bottom: 2rem; +} +.lc-tags span { + font-family: var(--font-mono); + font-size: 0.7rem; + color: var(--accent-electric); + border: 1px solid rgba(0,229,255,0.3); + padding: 0.3rem 0.8rem; + border-radius: 50px; +} +.gold-theme .lc-tags span { color: var(--accent-gold); border-color: rgba(255,180,0,0.3); } +.green-theme .lc-tags span { color: #00E87A; border-color: rgba(0,232,122,0.3); } + +/* Simplified mockup previews directly inside the new CSS */ +.lc-preview { + background: rgba(0,0,0,0.3); + border-radius: 8px; + height: 120px; + margin-bottom: 2rem; + display: flex; + align-items: center; + justify-content: center; + border: 1px solid rgba(255,255,255,0.02); + position: relative; + overflow: hidden; +} +.lc-btn { + display: inline-flex; + align-items: center; + justify-content: center; + gap: 0.5rem; + width: 100%; + padding: 1rem; + background: rgba(255,255,255,0.03); + border: 1px solid rgba(255,255,255,0.1); + color: #fff; + border-radius: 6px; + font-family: 'Plus Jakarta Sans', var(--font-main); + font-weight: 600; + text-decoration: none; + cursor: pointer; + transition: background 0.3s, border-color 0.3s; +} +.lc-btn:hover { background: rgba(0,229,255,0.1); border-color: var(--accent-electric); } +.gold-theme .lc-btn:hover { background: rgba(255,180,0,0.1); border-color: var(--accent-gold); } +.green-theme .lc-btn:hover { background: rgba(0,232,122,0.1); border-color: #00E87A; } + +/* Mini Map Val Grid & Pipe previews */ +.val-grid { display: grid; grid-template-columns: repeat(3, 1fr); gap: 2px; width: 60%; height: 60%; opacity: 0.5; } +.val-grid div { background: var(--accent-gold); transition: opacity 0.3s; } 
+.pipe-preview { display:flex; justify-content:space-between; padding: 0 10%; width: 100%; } +.pipe-node { font-family:var(--font-mono); font-size:0.7rem; color:rgba(255,255,255,0.4); display:flex; flex-direction:column; align-items:center; gap:0.5rem; } +.pipe-node i { font-size:1.5rem; } +.pipe-node.active { color:#00E87A; } + +/* --- TIMELINE V3 (SAAS SIDEBAR) --- */ +@media (max-width: 968px) { + .timeline-v3-layout { + grid-template-columns: 1fr !important; + gap: 2rem !important; + } + .timeline-sidebar { position: relative !important; top: 0 !important; } +} + +.timeline-v3-feed { + border-left: 1px solid rgba(255,255,255,0.05); + padding-left: 3rem; +} +@media (max-width: 768px) { + .timeline-v3-feed { padding-left: 1.5rem; } +} + +.tlv3-card { + position: relative; + padding-bottom: 3.5rem; + border-bottom: 1px solid rgba(255,255,255,0.05); + margin-bottom: 3.5rem; +} +.tlv3-card::before { + content: ''; + position: absolute; + left: -3rem; + top: 5px; + width: 12px; + height: 12px; + background: rgba(10,10,15,1); + border: 2px solid var(--accent-electric); + border-radius: 50%; + transform: translateX(-50%); + box-shadow: 0 0 10px rgba(0,229,255,0.3); +} +@media (max-width: 768px) { + .tlv3-card::before { left: -1.5rem; } +} + +.tlv3-period { + font-family: var(--font-mono); + font-size: 0.8rem; + color: var(--accent-electric); + background: rgba(0,229,255,0.1); + display: inline-block; + padding: 0.3rem 0.8rem; + border-radius: 50px; + margin-bottom: 1rem; + font-weight: 600; +} +.tlv3-role { + font-family: 'Syne', var(--font-heading); + font-size: 1.6rem; + color: #fff; + margin-bottom: 0.5rem; +} +.tlv3-company { + font-family: 'Plus Jakarta Sans', var(--font-main); + font-size: 0.95rem; + color: var(--text-tertiary); + margin-bottom: 1.2rem; + text-transform: uppercase; + letter-spacing: 0.05em; +} +.tlv3-desc { + font-family: 'Plus Jakarta Sans', var(--font-main); + font-size: 1rem; + color: var(--text-secondary); + line-height: 1.6; + margin-bottom: 
1.5rem; +} +.tlv3-tags { + display: flex; + flex-wrap: wrap; + gap: 0.5rem; +} +.tlv3-tags span { + font-family: var(--font-mono); + font-size: 0.75rem; + color: var(--text-secondary); + background: rgba(255,255,255,0.03); + padding: 0.3rem 0.6rem; + border-radius: 4px; + border: 1px solid rgba(255,255,255,0.1); +} + +/* --- TECH DNA ROW TAGS V2 --- */ +.t-tech-tag { + display: inline-flex; + align-items: center; + gap: 0.4rem; + padding: 0.4rem 0.7rem; + background: rgba(255, 255, 255, 0.03); + border: 1px solid rgba(255, 255, 255, 0.1); + border-radius: 6px; + font-family: var(--font-mono); + font-size: 0.75rem; + color: var(--text-secondary); + transition: all 0.3s ease; + cursor: default; +} +.t-tech-tag img { + width: 14px; + height: 14px; + filter: drop-shadow(0 0 5px rgba(255,255,255,0.1)); +} +.t-tech-tag i { + font-size: 1rem; +} +.t-row-tags:hover .t-tech-tag { + opacity: 0.4; /* Dim others on container hover */ +} +.t-row-tags .t-tech-tag:hover { + opacity: 1; + transform: translateY(-2px); + background: rgba(255, 255, 255, 0.08); + border-color: rgba(255, 255, 255, 0.3); + color: #fff; + box-shadow: 0 4px 15px rgba(0,0,0,0.3); +} + +/* --- ORBIT INTERACTION --- */ +.orbit-container:hover .orbit-ring, +.orbit-container:hover .sat { + animation-play-state: paused !important; +} +.orbit-container .sat { + transition: transform 0.3s ease, border-color 0.3s ease, box-shadow 0.3s ease, background 0.3s ease; +} +.orbit-container .center-core { + transition: transform 0.3s ease, border-color 0.3s ease, box-shadow 0.3s ease, background 0.3s ease; +} + +/* --- PREMIUM SAAS FOOTER V6 --- */ +.site-footer-v6 { + background: #050508; + border-top: 1px solid rgba(255,255,255,0.05); + padding: 6rem 5% 2rem; + position: relative; + overflow: hidden; +} +.site-footer-v6::before { + content: ''; + position: absolute; + top: 0; left: 0; right: 0; + height: 1px; + background: linear-gradient(90deg, transparent, rgba(0,229,255,0.5), transparent); +} +.footer-v6-grid { + 
display: grid; + grid-template-columns: 2fr 1fr 1fr; + gap: 4rem; + margin-bottom: 5rem; + max-width: 1400px; + margin-inline: auto; +} +@media (max-width: 968px) { + .footer-v6-grid { grid-template-columns: 1fr; gap: 3rem; } +} +.f-v6-brand h2 { + font-family: 'Syne', var(--font-heading); + font-size: 1.8rem; + font-weight: 800; + color: #fff; + margin-bottom: 1rem; + letter-spacing: -0.03em; +} +.f-v6-brand p { + font-family: 'Plus Jakarta Sans', var(--font-main); + color: var(--text-tertiary); + font-size: 0.9rem; + line-height: 1.8; + max-width: 400px; + margin-bottom: 2rem; +} +.f-v6-badges { + display: flex; + gap: 0.6rem; + flex-wrap: wrap; +} +.f-v6-badge { + padding: 0.3rem 0.8rem; + background: rgba(255,255,255,0.03); + border: 1px solid rgba(255,255,255,0.08); + border-radius: 50px; + font-family: var(--font-mono); + font-size: 0.65rem; + color: var(--text-secondary); + letter-spacing: 0.05em; + text-transform: uppercase; +} +.f-v6-col h3 { + font-family: 'Syne', var(--font-main); + color: #fff; + font-size: 0.9rem; + font-weight: 700; + letter-spacing: 0.1em; + text-transform: uppercase; + margin-bottom: 1.5rem; +} +.f-v6-link { + display: flex; + align-items: center; + gap: 0.5rem; + color: var(--text-tertiary); + font-family: 'Plus Jakarta Sans', var(--font-main); + text-decoration: none; + font-size: 0.9rem; + margin-bottom: 1rem; + transition: color 0.3s ease, transform 0.3s ease; +} +.f-v6-link:hover { + color: var(--accent-electric); + transform: translateX(5px); +} +.f-v6-link.highlight { color: #00E87A; font-weight: 600; } +.f-v6-link.highlight:hover { color: #4ade80; } + +.footer-v6-bottom { + display: flex; + justify-content: space-between; + align-items: center; + padding-top: 2rem; + border-top: 1px solid rgba(255,255,255,0.05); + max-width: 1400px; + margin-inline: auto; + flex-wrap: wrap; + gap: 1.5rem; +} +.f-v6-legal { + font-family: var(--font-mono); + font-size: 0.7rem; + color: rgba(255,255,255,0.4); + display: flex; + align-items: 
center; + gap: 1rem; +} +.f-v6-status { + display: flex; + align-items: center; + gap: 0.5rem; + background: rgba(0,232,122,0.05); + border: 1px solid rgba(0,232,122,0.2); + padding: 0.4rem 1rem; + border-radius: 50px; +} +.f-v6-dot { + width: 6px; height: 6px; + background: #00E87A; + border-radius: 50%; + animation: pulse 2s ease infinite; +} +.f-v6-status-text { + font-family: var(--font-mono); + font-size: 0.65rem; + color: #00E87A; + letter-spacing: 0.05em; + text-transform: uppercase; +} +.f-v6-telemetry { + font-family: var(--font-mono); + font-size: 0.65rem; + color: rgba(255,255,255,0.3); + display: flex; + align-items: center; + gap: 0.8rem; +} diff --git a/assets/js/dgz-core.js b/assets/js/dgz-core.js index b718a76..f8c065a 100644 --- a/assets/js/dgz-core.js +++ b/assets/js/dgz-core.js @@ -5,7 +5,7 @@ const dgzTranslations = { en: { - nav_home: "Core", + nav_home: "Nucleus", nav_lab: "Spatial Lab", nav_validator: "Command Center", nav_map: "Intel Map", @@ -18,10 +18,16 @@ const dgzTranslations = { system_status: "SYSTEM_LEVEL: SOVEREIGN", api_connected: "API CONNECTED", ai_active: "Groq Llama-3 Active", + // Hero hero_tag: "// SPATIAL_SYSTEMS_ENGINEERING", - hero_h1: "Architecting the Future of Territorial Intelligence & GIS Automation", - hero_desc: "High-Performance Geospatial Engineering & Automated Systems for Multipurpose Cadastre and Precision Mapping.", + hero_available: "Available for Projects · Medellín, Colombia", + hero_h1: "Automating Multipurpose Cadastre & Territorial Intelligence Systems", + hero_desc: "High-Performance Geospatial Engineering & Automated Systems for Multipurpose Cadastre, GovTech, and Territorial Intelligence.", + hero_btn_lab: "⭐ Explore Spatial Lab", + hero_btn_contact: "Link_Direct", + hero_btn_map: "🗺️ Intel Map", + // CV Modal cv_close: "EXIT_INTERFACE", cv_label_profile: "Professional_Identity", @@ -34,61 +40,277 @@ const dgzTranslations = { cv_label_exp_log: "Operational_History", cv_download: "Download Technical 
Datasheet (PDF)", cv_summary: "Spatial Systems Engineer focused on merging high-precision GIS expertise with robust architectural design and automated software development for territorial management.", - // Project Specifics - proj_automation: "Automation Systems", - proj_geo_llm: "Geo-LLM Intelligence", - proj_gis_dash: "GIS Dashboard", - proj_qgis: "QGIS Power Tools", - proj_research: "Research Lab", - // Validator specifics - val_title: "Catastral Intelligence Command Center", - val_module1: "Topological Engine", - val_module1_desc: "Strict topological validation on base cartography. Detects overlaps, slivers and geometric completeness.", - val_module2: "Interoperability", - val_module2_desc: "Synchronization of Cadastral Database (.shp/.gpkg) vs SNR Registry Matrix (.xlsx) to detect area and identifier inconsistencies.", - val_module3: "Analytics", - val_module3_desc: "Aggregate visualization of mapping health in real-time. Efficiency metrics and recurring error types.", - // Map specifics - map_title: "Spatial Lab — Cadastral Viewer", - map_layers: "Map Layers", - map_basemap: "Base Map", - map_legend: "Legend", - map_info: "Parcel Information", - layer_parcels: "Cadastral Parcels", - layer_roads: "Road Network", - layer_boundary: "Urban Boundary", - layer_labels: "Labels", - map_void_dark: "VOID DARK", - map_geo_sat: "GEO SAT", - map_topo_map: "TOPO MAP", - leg_validated: "Validated Parcel", - leg_pending: "In Progress", - leg_error: "Topological Error", - leg_unclassified: "Unclassified", - map_click_prompt: "Click on a parcel to view attributes", - // Additional UI - system_ready: "GEO_INTELLIGENCE_AGENT_01 // READY", - schema_title: "PostGIS Vector Schema", - back_to_core: "Return to Core", - // Automation Systems - auto_hero_title: "Automation Systems", - auto_hero_desc: "Unattended geospatial transformations replacing hundreds of manual hours.", - auto_init_btn: "Initialize Pipeline", - auto_challenge_title: "The Engineering Challenge", - 
auto_massive_data: "Massive Data Ingestion", - auto_massive_desc: "Our ETL systems process over 50,000 spatial records per batch, achieving 800% faster speeds.", - // GIS Research - res_hero_title: "GIS Research Lab", - res_hero_desc: "Deep exploration into Sentinel-2 imagery classification and predictive modeling.", - res_raw_data: "RAW_DATA [Sentinel-2]", - res_neural_layer: "DGZ_NEURAL_LAYER [Classified]", - res_urban_detect: "Urban Change Detection", - res_predictive: "Value Prediction Models", - // QGIS Plugin - qgis_hero_title: "QGIS Power Tools", - qgis_hero_desc: "Native desktop GIS mapping fused with automated LADM-COL validation rules.", - qgis_scan_btn: "Scan Layer Errors", - qgis_fix_btn: "Auto-Snap Geometries" + cv_role_1: "Geographic Analyst", + cv_role_2: "GIS Coordinator", + cv_role_3: "QA/QC Specialist", + cv_role_4: "Field Operations Leader", + cv_role_5: "Drone & Photogrammetry Lead", + cv_edu_1: "B.S. Spatial Engineering (Candidate)", + cv_edu_2: "Maintenance Technician", + cv_rec_1: "Management Systems", + cv_rec_2: "GIS Environmental Institute", + cv_rec_3: "Electrical Design", + cv_rec_4: "Occupational Health", + + // Capabilities / Services + cap_tag: "Specialized Services", + cap_title: "Specialized solutions for every territorial challenge", + svc_1_title: "Mass Cadastral Update", + svc_1_desc: "Management and validation of 500+ parcels per cycle. Full automation of the land recognition workflow under IGAC and LADM-COL V3 standards.", + svc_2_title: "Cadastral QA/QC (Quality Control)", + svc_2_desc: "Technical audits with Python and QGIS. Automatic detection of overlaps, gaps, and topological errors. Certified PDF reports for oversight.", + svc_3_title: "Geospatial ETL Pipelines", + svc_3_desc: "Unattended transformation architecture for massive GIS data ingestion.
From field to database with zero manual intervention.", + svc_4_title: "GeoAI — Change Detection", + svc_4_desc: "Machine Learning on Sentinel-2 imagery to detect urban expansion, land cover changes, and multi-temporal territorial mutations.", + svc_5_title: "Enterprise GIS Dashboards", + svc_5_desc: "Interactive interfaces connected to PostGIS for real-time territorial analysis. Visualization of cadastral indicators for executives and entities.", + svc_6_title: "GIS Field Coordination", + svc_6_desc: "Leadership of technical teams in complex terrains. Topographic leveling, mass digitization, and coordination under NSR-10 regulations.", + + // Challenges + chal_tag: "Territorial Challenges", + chal_title: "What I solve for your entity", + chal_desc: "Governments and territorial entities face critical spatial data challenges that hinder development. These are the problems I transform into automated solutions.", + chal_prob_1: "Slow and error-prone manual QA", + chal_sol_1: "Automated validation with Python + PostGIS", + chal_prob_2: "Inconsistent parcel data", + chal_sol_2: "Structured LADM-COL spatial modeling", + chal_prob_3: "Slow digitization processes", + chal_sol_3: "Automated GIS pipelines with GDAL/FME", + chal_prob_4: "Broken cadastral topology", + chal_sol_4: "Automated topological validation engines", + chal_prob_5: "No visibility on land cover changes", + chal_sol_5: "GeoAI: Change detection with Sentinel + Rasterio", + + // About + about_label: "Engineering Core", + about_title: "GIS Excellence & Software Architecture", + about_h3: "GIS EXCELLENCE & SOFTWARE ARCHITECTURE.", + about_p1: "We bridge the gap between traditional GIS mapping and scalable software architecture. Focused on high-precision data flows and technical automation. 
Our core transforms unstructured geographic data into automated, sovereign territorial intelligence.", + + // Tech DNA + tech_tag: "Tech Stack", + tech_title: "The Engineering Matrix", + tech_infra: "GIS_SPATIAL_INFRA", + tech_infra_desc: "PostGIS topology matrix, LADM-COL V3 compliance, and high-fidelity vector architecture.", + tech_sys: "SYSTEMS_ENGINEERING", + tech_sys_desc: "Backend systems for spatial intelligence, RESTful Node architecture, and automated logic.", + tech_auto: "AUTOMATION_DEVOPS", + tech_auto_desc: "CI/CD pipelines for geospatial assets, containerized spatial nodes, and kernel optimization.", + + // Metrics + metrics_label: "Proven Performance", + metrics_title: "Impact Metrics", + metrics_1_label: "Spatial Records Processed", + metrics_1_sub: "Parcels // Cartography // Cadastre", + metrics_2_label: "Error Margin Reduction", + metrics_2_sub: "QA/QC // Automation // Validation", + metrics_3_label: "Operational Efficiency", + metrics_3_sub: "ROI // Workflow Automation", + metrics_4_label: "Multipurpose Cadastre Projects", + metrics_4_sub: "LADM-COL // IGAC Standards", + + // Lab + lab_title: "Spatial Intelligence Laboratory", + lab_desc: "High-performance geospatial nodes demonstrating territorial automation, multipurpose cadastral engines, and advanced spatial systems engineering.", + lab_live: "The following assets are active demonstrations, not mere placeholders.", + lab_gesture: "Free Trial: Vision Sandbox (Gestural AI)", + lab_p1_title: "Interactive Spatial Node", + lab_p1_desc: "High-precision GIS viewer with real-time parcel rendering. Interacting with local datasets exported from QGIS with deep topology attributes.", + lab_p1_btn: "EXECUTE_VIEWER", + lab_p2_title: "Cadastral Intelligence Engine", + lab_p2_desc: "Automated LADM-COL V3 validator. 
Detecting overlaps, slivers, and topological inconsistencies using high-fidelity Python kernels.", + lab_p2_btn: "INITIALIZE_VAL_ENGINE", + lab_p2_full: "ACCESS_FULL_INTERFACE", + lab_p3_title: "Sovereign Data Pipelines", + lab_p3_desc: "Unattended ETL architecture for massive geospatial ingestion, ensuring data integrity across distributed GIS nodes.", + + // Projects + proj_label: "// FLAGSHIP_ASSETS", + proj_title: "Technical Deployments", + proj_1_desc: "Enterprise-level GIS monitoring for modern urban infrastructure. Synchronizing real-time telemetry across distributed nodes.", + proj_2_title: "Territorial ETL Pipelines", + proj_2_desc: "Unattended automation of geographic data transformations, replacing slow manual processes.", + proj_3_title: "Enterprise GIS Dashboard", + proj_3_desc: "Interactive interface connected to PostGIS for territorial analysis and real-time telemetry.", + proj_4_title: "LADM-COL QGIS Plugin", + proj_4_desc: "Scripts inserted as native tools in the QGIS UI for instantaneous topological validation.", + proj_5_title: "GeoAI Experimental", + proj_5_desc: "Machine Learning prototype lab on Sentinel-2 for urban change detection.", + proj_6_title: "Geo-LLM Intelligence", + proj_6_desc: "Artificial Intelligence agent that translates natural language into Spatial SQL queries and generates statistical cadastral reports in real time.", + + // GeoAI + geoai_label: "// GEOAI_MODULE", + geoai_title: "GeoAI Intelligence", + geoai_sub: "Urban Change Detection with Sentinel Imagery", + geoai_desc: "Application of geospatial artificial intelligence to detect urban and rural cover changes by comparing multi-temporal satellite images.
Complete pipeline from Sentinel download to GeoJSON export.", + geoai_step_1_title: "Sentinel-2 Imagery Download", + geoai_step_1_desc: "Copernicus API // Spectral bands B04, B08, B11 // 10m resolution", + geoai_step_2_title: "Processing with Rasterio + GeoPandas", + geoai_step_2_desc: "Radiometric normalization // Multi-temporal comparison // NumPy arrays", + geoai_step_3_title: "Classification with Scikit-learn", + geoai_step_3_desc: "Random Forest // Change detection // Polygon vectorization", + geoai_step_4_title: "GeoJSON Export → Web GIS", + geoai_step_4_desc: "MapLibre GL visualization // PostGIS integration // REST API", + geoai_status: "[CHANGE_DETECTION_ENGINE] STATUS: PROCESSING_DEMO", + + // Architecture + arch_label: "// ENGINEERING_ARCHITECTURE", + arch_title: "System Architecture", + arch_desc: "Complete architecture of the modern GovTech stack — from field capture to web publication. Designed to scale at municipal, departmental, or national levels.", + arch_node_1: "Field Capture", + arch_node_1_sub: "GPS // 360° Survey // Drones", + arch_node_2: "Python Validation Engine", + arch_node_2_sub: "GDAL // GeoPandas // Shapely", + arch_node_3: "PostGIS Database", + arch_node_3_sub: "PostgreSQL // LADM-COL Schema", + arch_node_4: "FastAPI Spatial API", + arch_node_4_sub: "/validate // /topology // /parcel_score", + arch_node_5: "Web GIS Interface", + arch_node_5_sub: "MapLibre GL JS // Chart.js", + + // Timeline + timeline_label: "Professional Trajectory", + timeline_title: "Professional Trajectory", + timeline_desc: "6+ years of specialized GIS and territorial engineering across Colombia's most complex cadastral projects.", + tl_role_1: "Geographic Analyst", + tl_desc_1: "Advanced geospatial processing via photointerpretation, 360° input analysis and vector restitution. 
Mass digitization for cadastral update with topological validation under LADM-COL V3.", + tl_role_2: "Cadastral Quality Control (QA/QC)", + tl_desc_2: "Technical quality audits for parcel recognition. IGAC QA standards enforcement, topological consistency checks, and field report validation.", + tl_role_3: "GIS Coordinator & Field Leader", + tl_desc_3: "Co-leader of field survey teams. Urban/rural topographic leveling, mass digitization, and GIS coordination for multipurpose cadastre missions in complex terrain.", + tl_role_4: "Professional GIS Coordinator", + tl_desc_4: "LADM-COL standards implementation, mass digitization and GIS coordination for Smart City multipurpose cadastre deployment. QA/QC across distributed field nodes.", + tl_role_5: "GIS Professional — Drones & Geomatics", + tl_desc_5: "Drone-based photogrammetric surveys, property management, and vector GIS production. Spatial analysis and cartographic outputs for real estate and environmental projects.", + tl_role_6: "Digitizer / Topographer", + tl_desc_6: "Road topography surveys and GIS digitization for road infrastructure projects. GPS field surveys and cartographic production under professional standards.", + + // Map + map_label: "// SPATIAL_INTEL_WORKSTATION_V6", + map_title: "Territorial Intelligence Hub", + map_desc: "Full-stack GIS workstation — interactive parcel data, toggleable layers, real coordinate intelligence, and live attribute queries.
Click any map node to inspect its LADM-COL attributes.", + map_layer_manager: "LAYER_MANAGER", + map_vector_layers: "VECTOR_LAYERS", + map_legend_label: "LEGEND", + map_projection_label: "PROJECTION INFO", + map_identify_hint: "IDENTIFY MODE — Click a feature to inspect", + legend_hq_node: "HQ Node — Active", + legend_field_node: "Field Node — Remote", + legend_parcel_validated: "Parcel — Validated", + legend_parcel_error: "Parcel — Error", + legend_urban_perimeter: "Urban Perimeter", + + // Pricing + pricing_tag: "ACQUIRE_OS_LICENSE // SPATIAL_INTELLIGENCE_SaaS", + pricing_desc: "The most advanced cadastral validation engine in Latin America. Designed for audit teams, municipalities, and GIS contractors who need real results.", + pricing_roi_1: "Faster than manual QA", + pricing_roi_2: "Error Reduction", + pricing_roi_3: "Parcels Processed", + pricing_roi_4: "V3 Certified Engine", + pricing_bad_header: "⛔ WITHOUT DGZ SPATIAL OS", + pricing_good_header: "✅ WITH DGZ SPATIAL OS", + pricing_bad_1: "Manual QA: 3-5 days per municipality", + pricing_bad_2: "Undetected topological errors", + pricing_bad_3: "Manual SNR cross-checking (error-prone)", + pricing_bad_4: "Non-standard Excel reports", + pricing_bad_5: "No traceability on mutation history", + pricing_good_1: "Automatic validation in minutes", + pricing_good_2: "LADM-COL V3 topological engine", + pricing_good_3: "Automatic SNR registry-folio (matrícula) cross-check", + pricing_good_4: "Certified Technical PDF and GeoJSON", + pricing_good_5: "Full traceability of every parcel", + + plan_starter_tier: "STARTER_CORE", + plan_starter_name: "Spatial Explorer", + plan_starter_for: "For GIS professionals, technicians, and advanced students.", + plan_starter_price: "$0", + plan_starter_period: "/month · Forever Free", + plan_starter_cap: "Capacity: 5,000 parcels/layer", + plan_starter_feat_1: "Basic LADM-COL topological validation", + plan_starter_feat_2: "Up to 5,000 parcels per layer", + plan_starter_feat_3: "5 Geo-LLM Reports / day", +
plan_starter_feat_4: "Embedded interactive GIS viewer", + plan_starter_no_1: "Physical-Legal SNR Cross-check", + plan_starter_no_2: "FastAPI for automation", + plan_starter_no_3: "Premium technical support", + plan_starter_btn: "Get Started Free", + + plan_pro_tier: "INTERVENTORÍA_PRO", + plan_pro_name: "Spatial Pro", + plan_pro_for: "For interventoría (oversight) contracts, massive QA/QC, and technical teams.", + plan_pro_price: "$10", + plan_pro_period: "/month · per user", + plan_pro_cap: "Capacity: Unlimited ∞", + plan_pro_feat_1: "Unlimited parcels (.shp / .gpkg)", + plan_pro_feat_2: "Automatic SNR Registry-Folio Cross-Check (Excel)", + plan_pro_feat_3: "Certified technical PDF reports", + plan_pro_feat_4: "FastAPI for full automation", + plan_pro_feat_5: "Export GeoJSON + PostGIS ready", + plan_pro_feat_6: "Real-time analytics dashboard", + plan_pro_feat_7: "48h priority technical support", + plan_pro_btn: "Acquire Pro License", + plan_pro_trust: "🔒 Secure payment · Cancel anytime", + + plan_gov_tier: "GOVERNMENT_NODE", + plan_gov_name: "Sovereign Tier", + plan_gov_for: "For Cadastre Offices, IGAC, Municipalities, and large contractors.", + plan_gov_price: "Custom", + plan_gov_period: "/project · On-prem available", + plan_gov_cap: "Enterprise-Scale · Dedicated Infra", + plan_gov_feat_1: "On-Premises deployment (own servers)", + plan_gov_feat_2: "SGDEA / Municipal Tax integration", + plan_gov_feat_3: "Fine-tuned Geo-LLM models for the municipality", + plan_gov_feat_4: "On-site training for the GIS team", + plan_gov_feat_5: "Guaranteed technical SLA (72h response)", + plan_gov_feat_6: "White-label: municipality's own brand", + plan_gov_feat_7: "Full access to source code (license)", + plan_gov_btn: "Contact Architect", + plan_gov_trust: "Response in < 24h · NDA available", + + // Contact + contact_label: "Start a Project", + contact_title: "Let's discuss your
next project.", + contact_desc: "Architecting the next generation of spatial intelligence. Let's discuss your project, municipality, or enterprise GIS challenge.", + contact_email_label: "Email", + contact_loc_label: "Location", + contact_linkedin_label: "LinkedIn", + contact_github_label: "GitHub", + form_name: "Full_Name", + form_placeholder_name: "Organization / Lead Architect", + form_email: "Email_Uplink", + form_service: "Service_Type", + form_service_select: "— Select Service —", + form_svc_1: "GIS Automation & ETL Pipeline", + form_svc_2: "Cadastral QA/QC (LADM-COL)", + form_svc_3: "Enterprise GIS Dashboard", + form_svc_4: "GeoAI / Satellite Analysis", + form_svc_5: "Government / Municipal Project", + form_svc_6: "Other / Consulting", + form_msg: "Mission_Brief", + form_placeholder_msg: "Define mission parameters & objectives...", + form_btn: "INITIATE_HANDSHAKE", + + // Footer + footer_copy: "© 2026 ALBERT DANIEL GAVIRIA ZAPATA. ALL RIGHTS RESERVED.", + + boot_1: "DGZ_OS_v5.2_SOVEREIGN [BOOT_SEQUENCE_INIT]", + boot_2: "Mounting PostGIS Vector Engine... OK", + boot_3: "Initializing LADM-COL Validation Matrix...", + boot_4: "Hydrating Spatial Intelligence Layers...", + boot_5: "Handshaker: Geo-LLM Proxy Status... 
CONNECTED", + boot_6: "DGZ_CORE: Decrypting Sovereign Credentials...", + boot_7: "ACCESS_GRANTED: Welcome, Engineer Zapata.", + + demo_btn_run: "RUN_VALIDATION_DEMO", + demo_btn_processing: "PROCESSING_NODES...", + demo_log_1: "[ENGINE] Initializing LADM-COL V3 ruleset...", + demo_log_2: "Extracting geometries via GeoPandas...", + demo_log_3: "Checking overlap matrix (O(n log n))...", + demo_log_4: "Analyzing sliver polygons < 0.05m2...", + demo_log_5: "CONSULTING_DGZ_ENGINE: PASS" }, es: { nav_home: "Núcleo", @@ -104,10 +326,16 @@ const dgzTranslations = { system_status: "NIVEL_SISTEMA: SOBERANO", api_connected: "API CONECTADA", ai_active: "Groq Llama-3 Activo", + // Hero hero_tag: "// INGENIERÍA_DE_SISTEMAS_ESPACIALES", - hero_h1: "Arquitectando el Futuro de la Inteligencia Territorial y Automatización SIG", - hero_desc: "Ingeniería Geoespacial de Alto Rendimiento y Sistemas Automatizados para Catastro Multipropósito y Mapeo de Precisión.", + hero_available: "Disponible para Proyectos · Medellín, Colombia", + hero_h1: "Automatizando Sistemas de Inteligencia Territorial y Catastro Multipropósito", + hero_desc: "Ingeniería Geoespacial de Alto Rendimiento y Sistemas Automatizados para Catastro Multipropósito, GovTech e Inteligencia Territorial.", + hero_btn_lab: "⭐ Explorar Lab Espacial", + hero_btn_contact: "Enlace_Directo", + hero_btn_map: "🗺️ Mapa Intel", + // CV Modal cv_close: "SALIR_INTERFAZ", cv_label_profile: "Identidad_Profesional", @@ -120,61 +348,277 @@ const dgzTranslations = { cv_label_exp_log: "Historial_Operativo", cv_download: "Descargar Datasheet Técnico (PDF)", cv_summary: "Ingeniero de Sistemas Espaciales enfocado en fusionar la experiencia SIG de alta precisión con un diseño arquitectónico robusto y desarrollo de software automatizado para la gestión territorial.", - // Project Specifics - proj_automation: "Sistemas de Automatización", - proj_geo_llm: "Inteligencia Geo-LLM", - proj_gis_dash: "Tablero GIS", - proj_qgis: "Herramientas QGIS", - 
proj_research: "Laboratorio de Investigación", - // Validator specifics - val_title: "Centro de Mando de Inteligencia Catastral", - val_module1: "Motor Topológico", - val_module1_desc: "Validación topológica estricta sobre cartografía base. Detecta traslapes, slivers y completitud geométrica.", - val_module2: "Interoperabilidad", - val_module2_desc: "Sincronización de Base de Datos Catastral (.shp/.gpkg) vs Matriz de Registro SNR (.xlsx) para detectar inconsistencias.", - val_module3: "Analítica", - val_module3_desc: "Visualización agregada de la salud cartográfica en tiempo real. Métricas de eficiencia y errores recurrentes.", - // Map specifics - map_title: "Lab Espacial — Visor Catastral", - map_layers: "Capas del Mapa", - map_basemap: "Mapa Base", - map_legend: "Leyenda", - map_info: "Información Predial", - layer_parcels: "Predios Catastrales", - layer_roads: "Red Vial", - layer_boundary: "Límite Urbano", - layer_labels: "Etiquetas", - map_void_dark: "OSCURO", - map_geo_sat: "SATÉLITE", - map_topo_map: "TOPOGRÁFICO", - leg_validated: "Validado", - leg_pending: "En Proceso", - leg_error: "Fallo Topológico", - leg_unclassified: "Sin Clasificar", - map_click_prompt: "Click en un predio para ver atributos", - // Additional UI - system_ready: "GEO_INTELLIGENCE_AGENT_01 // LISTO", - schema_title: "Esquema Vectorial PostGIS", - back_to_core: "Volver al Núcleo", - // Automation Systems - auto_hero_title: "Sistemas de Automatización", - auto_hero_desc: "Transformaciones geoespaciales desatendidas que reemplazan cientos de horas manuales.", - auto_init_btn: "Inicializar Pipeline", - auto_challenge_title: "El Desafío de Ingeniería", - auto_massive_data: "Ingesta Masiva de Datos", - auto_massive_desc: "Nuestros sistemas ETL procesan más de 50.000 registros espaciales por lote, logrando velocidades un 800% superiores.", - // GIS Research - res_hero_title: "Laboratorio de Investigación GIS", - res_hero_desc: "Exploración profunda en clasificación de imágenes Sentinel-2 y 
modelado predictivo.", - res_raw_data: "DATOS_BRUTOS [Sentinel-2]", - res_neural_layer: "DGZ_CAPA_NEURAL [Clasificado]", - res_urban_detect: "Detección de Cambios Urbanos", - res_predictive: "Modelos Predictivos de Valor", - // QGIS Plugin - qgis_hero_title: "Herramientas QGIS", - qgis_hero_desc: "Cartografía GIS nativa de escritorio fusionada con reglas de validación LADM-COL.", - qgis_scan_btn: "Escanear Errores", - qgis_fix_btn: "Auto-Corregir Geometrías" + cv_role_1: "Analista Geográfico", + cv_role_2: "Coordinador Profesional SIG", + cv_role_3: "Control Calidad Catastral (QA/QC)", + cv_role_4: "Coordinador & Líder de Campo", + cv_role_5: "Profesional SIG (Drones & GIS)", + cv_edu_1: "Ingeniería de Sistemas (C)", + cv_edu_2: "Técnico Conservación", + cv_rec_1: "Sistemas de Gestión", + cv_rec_2: "Instituto Ambiental SIG", + cv_rec_3: "Diseño Eléctrico", + cv_rec_4: "Seguridad y Salud", + + // Capabilities / Services + cap_tag: "Servicios Especializados", + cap_title: "Soluciones especializadas para cada desafío territorial", + svc_1_title: "Actualización Catastral Masiva", + svc_1_desc: "Gestión y validación de +500 predios por ciclo. Automatización completa del flujo de reconocimiento predial bajo estándares IGAC y LADM-COL V3.", + svc_2_title: "QA/QC Catastral (Control de Calidad)", + svc_2_desc: "Auditorías técnicas con Python y QGIS. Detección automática de solapamientos, huecos y errores topológicos. Reportes PDF certificados para interventoría.", + svc_3_title: "Pipelines ETL Geoespaciales", + svc_3_desc: "Arquitectura de transformación desatendida para ingesta masiva de datos GIS. 
De campo a base de datos sin intervención manual.", + svc_4_title: "GeoAI — Detección de Cambios", + svc_4_desc: "Machine Learning sobre imagenería Sentinel-2 para detectar expansión urbana, cambios de cobertura y mutaciones territoriales multitemporales.", + svc_5_title: "Dashboards GIS Empresariales", + svc_5_desc: "Interfaces interactivas conectadas a PostGIS para análisis territorial en tiempo real. Visualización de indicadores catastrales para directivos y entidades.", + svc_6_title: "Coordinación de Campo GIS", + svc_6_desc: "Liderazgo de equipos técnicos en terrenos complejos. Nivelación topográfica, digitalización masiva y coordinación bajo normativa NSR-10.", + + // Challenges + chal_tag: "Desafíos Territoriales", + chal_title: "Lo que resuelvo para su entidad", + chal_desc: "Los gobiernos y entidades territoriales enfrentan desafíos críticos de datos espaciales que frenan el desarrollo. Estos son los problemas que transformo en soluciones automatizadas.", + chal_prob_1: "QA manual lento y propenso a errores", + chal_sol_1: "Validación automática con Python + PostGIS", + chal_prob_2: "Datos prediales inconsistentes", + chal_sol_2: "Modelado espacial LADM-COL estructurado", + chal_prob_3: "Procesos de digitalización lentos", + chal_sol_3: "Pipelines SIG automatizados con GDAL/FME", + chal_prob_4: "Topología catastral rota", + chal_sol_4: "Motores de validación topológica automatizados", + chal_prob_5: "Sin visibilidad sobre cambios de cobertura", + chal_sol_5: "GeoAI: detección cambios con Sentinel + Rasterio", + + // About + about_label: "Núcleo de Ingeniería", + about_title: "Excelencia GIS & Arquitectura de Software", + about_h3: "EXCELENCIA SIG Y ARQUITECTURA DE SOFTWARE.", + about_p1: "Cerramos la brecha entre el mapeo SIG tradicional y la arquitectura de software escalable. Enfocados en flujos de datos de alta precisión y automatización técnica. 
Nuestro núcleo transforma datos geográficos sin estructura en inteligencia territorial automatizada y soberana.", + + // Tech DNA + tech_tag: "Stack Tecnológico", + tech_title: "La Matriz de Ingeniería", + tech_infra: "INFRAESTRUCTURA_ESPACIAL_SIG", + tech_infra_desc: "Matriz de topología PostGIS, cumplimiento LADM-COL V3 y arquitectura vectorial de alta fidelidad.", + tech_sys: "INGENIERÍA_DE_SISTEMAS", + tech_sys_desc: "Sistemas backend para inteligencia espacial, arquitectura Node RESTful y lógica automatizada.", + tech_auto: "AUTOMATIZACIÓN_DEVOPS", + tech_auto_desc: "Pipelines CI/CD para activos geoespaciales, nodos espaciales en contenedores y optimización de kernel.", + + // Metrics + metrics_label: "Rendimiento Comprobado", + metrics_title: "Métricas de Impacto", + metrics_1_label: "Registros Espaciales Procesados", + metrics_1_sub: "Predios // Cartografía // Catastro", + metrics_2_label: "Reducción del Margen de Error", + metrics_2_sub: "QA/QC // Automatización // Validación", + metrics_3_label: "Eficiencia Operativa", + metrics_3_sub: "ROI // Automatización de Workflow", + metrics_4_label: "Proyectos de Catastro Multipropósito", + metrics_4_sub: "LADM-COL // Estándares IGAC", + + // Lab + lab_title: "Laboratorio de Inteligencia Espacial", + lab_desc: "Nodos geoespaciales de alto rendimiento que demuestran automatización territorial, motores catastrales multipropósito e ingeniería de sistemas espaciales avanzada.", + lab_live: "Los siguientes activos son demostraciones activas, no simples marcadores.", + lab_gesture: "Prueba Gratuita: Vision Sandbox (IA Gestual)", + lab_p1_title: "Nodo Espacial Interactivo", + lab_p1_desc: "Visor SIG de alta precisión con renderizado de predios en tiempo real. Interactuando con datasets locales exportados de QGIS con atributos topológicos profundos.", + lab_p1_btn: "EJECUTAR_VISOR", + lab_p2_title: "Motor de Inteligencia Catastral", + lab_p2_desc: "Validador automatizado LADM-COL V3. 
Detección de traslapes, slivers e inconsistencias topológicas utilizando kernels de Python de alta fidelidad.", + lab_p2_btn: "INICIALIZAR_MOTOR_VAL", + lab_p2_full: "ACCEDER_INTERFAZ_COMPLETA", + lab_p3_title: "Pipelines de Datos Soberanos", + lab_p3_desc: "Arquitectura ETL desatendida para ingesta geoespacial masiva, garantizando la integridad de los datos en nodos SIG distribuidos.", + + // Projects + proj_label: "// ACTIVOS_ESTRELLA", + proj_title: "Despliegues Técnicos", + proj_1_desc: "Monitoreo SIG de nivel empresarial para infraestructura urbana moderna. Sincronización de telemetría en tiempo real entre nodos distribuidos.", + proj_2_title: "Pipelines ETL Territoriales", + proj_2_desc: "Automatización desatendida de transformaciones de datos geográficos para evitar procesos manuales lentos.", + proj_3_title: "Tablero GIS Empresarial", + proj_3_desc: "Interfaz interactiva conectada a PostGIS para análisis territorial y telemetría en tiempo real.", + proj_4_title: "Plugin QGIS LADM-COL", + proj_4_desc: "Scripts insertados como herramientas nativas en la UI de QGIS para validación topológica instantánea.", + proj_5_title: "GeoAI Experimental", + proj_5_desc: "Laboratorio de prototipos de Machine Learning sobre Sentinel-2 para detección de cambios urbanos.", + proj_6_title: "Inteligencia Geo-LLM", + proj_6_desc: "Agente de Inteligencia Artificial que traduce lenguaje natural a sentencias Spatial SQL y genera informes catastrales estadísticos en tiempo real.", + + // GeoAI + geoai_label: "// MÓDULO_GEOAI", + geoai_title: "Inteligencia GeoAI", + geoai_sub: "Detección de Cambios Urbanos con Imágenes Sentinel", + geoai_desc: "Aplicación de inteligencia artificial geoespacial para detectar cambios de cobertura urbana y rural comparando imágenes satelitales multitemporales. 
Pipeline completo desde descarga Sentinel hasta exportación GeoJSON.", + geoai_step_1_title: "Descarga de Imágenes Sentinel-2", + geoai_step_1_desc: "API Copernicus // Bandas espectrales B04, B08, B11 // resolución 10m", + geoai_step_2_title: "Procesamiento con Rasterio + GeoPandas", + geoai_step_2_desc: "Normalización radiométrica // Comparación multitemporal // NumPy arrays", + geoai_step_3_title: "Clasificación con Scikit-learn", + geoai_step_3_desc: "Random Forest // Detección de cambios // Vectorización de polígonos", + geoai_step_4_title: "Exportación GeoJSON → Web GIS", + geoai_step_4_desc: "Visualización en MapLibre GL // Integración PostGIS // API REST", + geoai_status: "[MOTOR_DETECCIÓN_CAMBIOS] ESTADO: PROCESANDO_DEMO", + + // Architecture + arch_label: "// ARQUITECTURA_DE_INGENIERÍA", + arch_title: "Arquitectura del Sistema", + arch_desc: "Arquitectura completa del stack GovTech moderno — desde captura en campo hasta publicación web. Diseñado para escalar a nivel municipal, departamental o nacional.", + arch_node_1: "Captura en Campo", + arch_node_1_sub: "GPS // Levantamiento 360° // Drones", + arch_node_2: "Motor de Validación Python", + arch_node_2_sub: "GDAL // GeoPandas // Shapely", + arch_node_3: "Base de Datos PostGIS", + arch_node_3_sub: "PostgreSQL // Esquema LADM-COL", + arch_node_4: "FastAPI Spatial API", + arch_node_4_sub: "/validate // /topology // /parcel_score", + arch_node_5: "Interfaz Web GIS", + arch_node_5_sub: "MapLibre GL JS // Chart.js", + + // Timeline + timeline_label: "Trayectoria Profesional", + timeline_title: "Trayectoria Profesional", + timeline_desc: "Más de 6 años de experiencia especializada en SIG e ingeniería territorial en los proyectos catastrales más complejos de Colombia.", + tl_role_1: "Analista Geográfico", + tl_desc_1: "Procesamiento avanzado de información geoespacial mediante fotointerpretación, análisis de insumos 360° y restitución vectorial. 
Digitalización masiva para actualización catastral con validación topológica bajo LADM-COL V3.", + tl_role_2: "Control de Calidad Catastral (QA/QC)", + tl_desc_2: "Auditorías de calidad técnica para reconocimiento predial. Aplicación de estándares de calidad IGAC, comprobaciones de consistencia topológica y validación de informes de campo.", + tl_role_3: "Coordinador SIG y Líder de Campo", + tl_desc_3: "Co-líder de equipos de levantamiento de campo. Nivelación topográfica urbana/rural, digitalización masiva y coordinación SIG para misiones de catastro multipropósito en terrenos complejos.", + tl_role_4: "Coordinador Profesional SIG", + tl_desc_4: "Implementación de estándares LADM-COL, digitalización masiva y coordinación SIG para despliegue de catastro multipropósito en Smart Cities. QA/QC en nodos de campo distribuidos.", + tl_role_5: "Profesional SIG — Drones y Geomática", + tl_desc_5: "Levantamientos fotogramétricos con drones, gestión predial y producción SIG vectorial. Análisis espacial y salidas cartográficas para proyectos inmobiliarios y ambientales.", + tl_role_6: "Digitalizador / Topógrafo", + tl_desc_6: "Levantamientos topográficos viales y digitalización SIG para proyectos de infraestructura víal. Levantamientos de campo con GPS y producción cartográfica bajo estándares profesionales.", + + // Map + map_label: "// ESTACIÓN_DE_TRABAJO_INTEL_ESPACIAL_V6", + map_title: "Hub de Inteligencia Territorial", + map_desc: "Estación de trabajo SIG completa — datos prediales interactivos, capas conmutables, inteligencia de coordenadas reales y consultas de atributos en vivo. 
Haga clic en cualquier nodo del mapa para inspeccionar sus atributos LADM-COL.", + map_layer_manager: "GESTOR_CAPAS", + map_vector_layers: "CAPAS_VECTORIALES", + map_legend_label: "LEYENDA", + map_projection_label: "INFO PROYECCIÓN", + map_identify_hint: "MODO IDENTIFICAR — Click en un elemento para inspeccionar", + legend_hq_node: "Nodo HQ — Activo", + legend_field_node: "Nodo Campo — Remoto", + legend_parcel_validated: "Predio — Validado", + legend_parcel_error: "Predio — Error", + legend_urban_perimeter: "Perímetro Urbano", + + // Pricing + pricing_tag: "ADQUIRIR_LICENCIA_OS // Inteligencia_Espacial_SaaS", + pricing_desc: "El motor de validación catastral más avanzado de Latinoamérica. Diseñado para interventorías, municipios y contratistas SIG que necesitan resultados reales.", + pricing_roi_1: "Más rápido que QA manual", + pricing_roi_2: "Reducción de Errores", + pricing_roi_3: "Predios Procesados", + pricing_roi_4: "Motor Certificado V3", + pricing_bad_header: "⛔ SIN DGZ SPATIAL OS", + pricing_good_header: "✅ CON DGZ SPATIAL OS", + pricing_bad_1: "QA manual: 3-5 días por municipio", + pricing_bad_2: "Errores topológicos sin detectar", + pricing_bad_3: "Cruce SNR manual (propenso a error)", + pricing_bad_4: "Reportes en Excel sin estándar", + pricing_bad_5: "Sin trazabilidad sobre historia de mutaciones", + pricing_good_1: "Validación automática en minutos", + pricing_good_2: "Motor topológico LADM-COL V3", + pricing_good_3: "Cruce matricular automático SNR", + pricing_good_4: "PDF Técnico y GeoJSON certificado", + pricing_good_5: "Trazabilidad completa de cada predio", + + plan_starter_tier: "STARTER_CORE", + plan_starter_name: "Explorador Espacial", + plan_starter_for: "Para profesionales SIG, técnicos y estudiantes avanzados.", + plan_starter_price: "$0", + plan_starter_period: "/mes · Gratis por Siempre", + plan_starter_cap: "Capacidad: 5,000 predios/capa", + plan_starter_feat_1: "Validación topológica LADM-COL básica", + plan_starter_feat_2: "Hasta 5,000 
predios por capa", + plan_starter_feat_3: "5 Reportes Geo-LLM / día", + plan_starter_feat_4: "Visor GIS interactivo embebido", + plan_starter_no_1: "Cruce Físico-Jurídico SNR", + plan_starter_no_2: "API FastAPI para automatización", + plan_starter_no_3: "Soporte técnico premium", + plan_starter_btn: "Comenzar Gratis", + + plan_pro_tier: "INTERVENTORÍA_PRO", + plan_pro_name: "Spatial Pro", + plan_pro_for: "Para contratos de empalme, QA/QC masivo y equipos técnicos.", + plan_pro_price: "$10", + plan_pro_period: "/mes · por usuario", + plan_pro_cap: "Capacidad: Ilimitada ∞", + plan_pro_feat_1: "Predios ilimitados (.shp / .gpkg)", + plan_pro_feat_2: "Cruce Matricular SNR automático (Excel)", + plan_pro_feat_3: "Reportes PDF técnicos certificados", + plan_pro_feat_4: "FastAPI para automatización completa", + plan_pro_feat_5: "Export GeoJSON + PostGIS ready", + plan_pro_feat_6: "Dashboard de analítica en tiempo real", + plan_pro_feat_7: "Soporte técnico prioritario 48h", + plan_pro_btn: "Adquirir Licencia Pro", + plan_pro_trust: "🔒 Pago seguro · Cancela cuando quieras", + + plan_gov_tier: "NODO_GUBERNAMENTAL", + plan_gov_name: "Nivel Soberano", + plan_gov_for: "Para Oficinas de Catastro, IGAC, Alcaldías y grandes contratistas.", + plan_gov_price: "Personalizado", + plan_gov_period: "/proyecto · On-prem disponible", + plan_gov_cap: "Escala Empresarial · Infra Dedicada", + plan_gov_feat_1: "Despliegue On-Premises (servidores propios)", + plan_gov_feat_2: "Integración SGDEA / Rentas municipales", + plan_gov_feat_3: "Modelos Geo-LLM ajustados al municipio", + plan_gov_feat_4: "Capacitación presencial del equipo SIG", + plan_gov_feat_5: "SLA técnico garantizado (72h respuesta)", + plan_gov_feat_6: "White-label: marca propia del municipio", + plan_gov_feat_7: "Acceso total al código fuente (licencia)", + plan_gov_btn: "Contactar Arquitecto", + plan_gov_trust: "Respuesta en < 24h · NDA disponible", + + // Contact + contact_label: "Iniciar Proyecto", + contact_title: "Hablemos 
de su
próximo proyecto.", + contact_desc: "Arquitectando la próxima generación de inteligencia espacial. Hablemos de su proyecto, municipio o desafío SIG empresarial.", + contact_email_label: "Correo", + contact_loc_label: "Ubicación", + contact_linkedin_label: "LinkedIn", + contact_github_label: "GitHub", + form_name: "Nombre_Completo", + form_placeholder_name: "Organización / Arquitecto Líder", + form_email: "Enlace_Email", + form_service: "Tipo_de_Servicio", + form_service_select: "— Seleccionar Servicio —", + form_svc_1: "Automatización SIG y Pipeline ETL", + form_svc_2: "QA/QC Catastral (LADM-COL)", + form_svc_3: "Tablero GIS Empresarial", + form_svc_4: "GeoAI / Análisis Satelital", + form_svc_5: "Proyecto Gubernamental / Municipal", + form_svc_6: "Otro / Consultoría", + form_msg: "Breve_de_Misión", + form_placeholder_msg: "Defina parámetros y objetivos de la misión...", + form_btn: "INICIAR_HANDSHAKE", + + // Footer + footer_copy: "© 2026 ALBERT DANIEL GAVIRIA ZAPATA. TODOS LOS DERECHOS RESERVADOS.", + + boot_1: "DGZ_OS_v5.2_SOBERANO [INICIANDO_SECUENCIA_ARRANQUE]", + boot_2: "Montando Motor Vectorial PostGIS... OK", + boot_3: "Inicializando Matriz de Validación LADM-COL...", + boot_4: "Hidratando Capas de Inteligencia Espacial...", + boot_5: "Estado Proxy Geo-LLM... 
CONECTADO", + boot_6: "DGZ_CORE: Desencriptando Credenciales Soberanas...", + boot_7: "ACCESO_CONCEDIDO: Bienvenido, Ingeniero Zapata.", + + demo_btn_run: "EJECUTAR_DEMO_VALIDACIÓN", + demo_btn_processing: "PROCESANDO_NODOS...", + demo_log_1: "[MOTOR] Inicializando reglas LADM-COL V3...", + demo_log_2: "Extrayendo geometrías vía GeoPandas...", + demo_log_3: "Comprobando matriz de traslapes (O(n log n))...", + demo_log_4: "Analizando polígonos astilla < 0.05m2...", + demo_log_5: "CONSULTANDO_MOTOR_DGZ: PASA" } }; @@ -189,7 +633,7 @@ class DGZCore { this.injectHeader(); this.applyTranslations(); this.setupListeners(); - + // Fast-load check: if skip_intro is in URL or was recently seen if (window.location.search.includes('skip_intro') || sessionStorage.getItem('dgz_skip_intro')) { const intro = document.getElementById('terminal-intro'); @@ -198,7 +642,7 @@ class DGZCore { } else { this.handleTerminal(); } - + this.setupMouseTracking(); }); } @@ -212,7 +656,7 @@ class DGZCore { const y = (e.clientY / window.innerHeight) * 100; document.documentElement.style.setProperty('--mouse-x', `${x}%`); document.documentElement.style.setProperty('--mouse-y', `${y}%`); - + const cursor = document.querySelector('.cursor-main'); const trail = document.querySelector('.cursor-trail'); if (cursor && trail) { @@ -232,15 +676,16 @@ class DGZCore { // Reset terminal terminal.innerHTML = ''; + const t = dgzTranslations[this.lang]; const lines = [ - { text: "DGZ_OS_v5.2_SOVEREIGN [BOOT_SEQUENCE_INIT]", color: "var(--accent-cyan)" }, - { text: "Mounting PostGIS Vector Engine... OK", color: "#fff" }, - { text: "Initializing LADM-COL Validation Matrix...", color: "#fff" }, - { text: "Hydrating Spatial Intelligence Layers...", color: "#fff" }, - { text: "Handshaker: Geo-LLM Proxy Status... 
CONNECTED", color: "var(--status-ok)" }, - { text: "DGZ_CORE: Decrypting Sovereign Credentials...", color: "#fff" }, - { text: "ACCESS_GRANTED: Welcome, Engineer Zapata.", color: "var(--accent-electric)" } + { text: t.boot_1, color: "var(--accent-cyan)" }, + { text: t.boot_2, color: "#fff" }, + { text: t.boot_3, color: "#fff" }, + { text: t.boot_4, color: "#fff" }, + { text: t.boot_5, color: "var(--status-ok)" }, + { text: t.boot_6, color: "#fff" }, + { text: t.boot_7, color: "var(--accent-electric)" } ]; for (const line of lines) { @@ -248,7 +693,7 @@ class DGZCore { div.className = 'terminal-line'; div.style.color = line.color; terminal.appendChild(div); - + for (let i = 0; i < line.text.length; i++) { div.textContent += line.text[i]; await new Promise(r => setTimeout(r, 10)); @@ -258,7 +703,7 @@ class DGZCore { setTimeout(() => { const intro = document.getElementById('terminal-intro'); - if(intro) intro.classList.add('hidden'); + if (intro) intro.classList.add('hidden'); sessionStorage.setItem('dgz_skip_intro', 'true'); this.startTelemetry(); }, 800); @@ -281,7 +726,6 @@ class DGZCore { const header = document.createElement('header'); header.id = 'dgz-global-header'; header.className = 'dgz-nav-master'; - header.className = 'dgz-nav-master'; const isSubDir = window.location.pathname.includes('/projects/') || window.location.pathname.includes('/lab/'); const rootPath = isSubDir ? (window.location.pathname.includes('/projects/') ? 
'../../' : '../') : './'; @@ -503,8 +947,10 @@ class DGZCore { if (t[key]) { if (el.tagName === 'INPUT' || el.tagName === 'TEXTAREA') { el.placeholder = t[key]; + } else if (el.tagName === 'SELECT') { + // Logic for select options if needed } else { - el.textContent = t[key]; + el.innerHTML = t[key]; } } }); @@ -521,15 +967,18 @@ async function runCadastralValidation() { const output = document.getElementById('validator-output'); if (!btn || !output) return; + const currentLang = localStorage.getItem('dgz_lang') || 'es'; + const t = dgzTranslations[currentLang]; + btn.disabled = true; - btn.textContent = "PROCESSING_NODES..."; - output.innerHTML = '
[ENGINE] Initializing LADM-COL V3 ruleset...
'; + btn.textContent = t.demo_btn_processing; + output.innerHTML = `
${t.demo_log_1}
`; const phases = [ - { t: "Extracting geometries via GeoPandas...", c: "#fff" }, - { t: "Checking overlap matrix (O(n log n))...", c: "#fff" }, - { t: "Analyzing sliver polygons < 0.05m2...", c: "#fff" }, - { t: "CONSULTING_DGZ_ENGINE: PASS", c: "var(--status-ok)" } + { t: t.demo_log_2, c: "#fff" }, + { t: t.demo_log_3, c: "#fff" }, + { t: t.demo_log_4, c: "#fff" }, + { t: t.demo_log_5, c: "var(--status-ok)" } ]; for (const p of phases) { diff --git a/assets/js/main.js b/assets/js/main.js index 9163888..945c043 100644 --- a/assets/js/main.js +++ b/assets/js/main.js @@ -6,17 +6,20 @@ const translations = { nav_contact: "Contact", nav_cv: "CV", hero_tag: "// SPATIAL_SYSTEMS_ENGINEERING", - hero_title: "DESIGNING THE FUTURE OF SPATIAL DATA", + hero_h1: "Automating Multipurpose Cadastre & Territorial Intelligence Systems", + hero_title: "Automating Multipurpose Cadastre & Territorial Intelligence Systems", hero_desc: "High-Performance Geospatial Engineering & Automated Systems Architecture.", hero_btn_projects: "Explore System", - hero_btn_contact: "Link_Direct", + hero_btn_contact: "Contact", + lab_live: "DGZ SPATIAL LAB · LIVE DEMOS", + lab_gesture: "Free Trial: Vision Sandbox (Gestural AI)", terminal_init: "LOADING_CORE_SYSTEM...", terminal_status_1: "FETCH_SPATIAL_NODES...", terminal_status_2: "SYNC_POSTGIS_BUFFERS...", terminal_status_3: "BYPASS_MANUAL_LIMITS...", terminal_done: "ACCESS_GRANTED.", - about_label: "// ARCHITECTURE", - about_title: "The Engineering Core", + about_label: "Engineering Core", + about_title: "GIS Excellence & Software Architecture", about_p1: "We bridge the gap between traditional GIS mapping and scalable software architecture. 
Focused on high-precision data workflows and technical automation.", proj_label: "// FLAGSHIP_ASSETS", cv_header: "BIO-NEURAL CV // ALBERT GAVIRIA V5.2", @@ -36,8 +39,8 @@ const translations = { form_email: "Email_Uplink", form_msg: "Mission_Brief", form_btn: "INITIATE_HANDSHAKE", - metrics_label: "// PROVEN_PERFORMANCE", - metrics_title: "Neural Impact Core", + metrics_label: "Proven Performance", + metrics_title: "Impact Metrics", metric_roi: "Operational Efficiency", metric_error: "Error Margin Reduction", metric_team: "Personnel Managed", @@ -64,7 +67,7 @@ const translations = { map_label: "// SPATIAL_INTEL_WORKSTATION_V6", map_title: "Territorial Intelligence Hub", map_desc: "Full-stack GIS workstation — interactive parcel data, toggleable layers, real coordinate intelligence. Click any map node to inspect LADM-COL attributes.", - timeline_label: "// TEMPORAL_NODES", + timeline_label: "Professional Trajectory", timeline_title: "Professional Trajectory", timeline_desc: "6+ years of specialized GIS and territorial engineering across Colombia's most complex cadastral projects.", tl_role_1: "Geographic Analyst", @@ -79,9 +82,9 @@ const translations = { tl_desc_5: "Drone-based photogrammetric surveys, predial management, and vector GIS production. Spatial analysis and cartographic outputs for real estate and environmental projects.", tl_role_6: "Digitizer / Topographer", tl_desc_6: "Road topography surveys and GIS digitization for vial infrastructure projects. GPS field surveys and cartographic production under professional standards.", - contact_title: "INITIATE\nCONNECTION.", - contact_desc: "Architecting the next generation of spatial intelligence. Let's discuss your project, municipality, or enterprise GIS challenge.", - contact_label: "// PROJECT_INITIATION", + contact_title: "Let's discuss your
next project.", + contact_desc: "Next-generation territorial intelligence. Let's discuss your project, municipality, or enterprise GIS challenge.", + contact_label: "Start a Project", contact_email_label: "Email", contact_loc_label: "Location", form_service: "Service_Type", @@ -92,7 +95,22 @@ const translations = { form_svc_4: "GeoAI / Satellite Analysis", form_svc_5: "Government / Municipal Project", form_svc_6: "Other / Consulting", - cv_download: "Download Datasheet (PDF)" + cv_download: "Download Datasheet (PDF)", + hero_available: "Available for Projects · Medellín, Colombia", + cap_tag: "Specialized Services", + cap_title: "Specialized solutions for every territorial challenge", + svc_1_title: "Mass Cadastral Update", + svc_1_desc: "Management and validation of 500+ parcels per cycle. Full automation of land recognition workflow under IGAC and LADM-COL V3 standards.", + svc_2_title: "Cadastral QA/QC (Quality Control)", + svc_2_desc: "Technical audits with Python and QGIS. Automatic detection of overlaps, gaps, and topological errors. Certified PDF reports for oversight.", + svc_3_title: "Geospatial ETL Pipelines", + svc_3_desc: "Unattended transformation architecture for massive GIS data ingestion. From field to database with zero manual intervention.", + svc_4_title: "GeoAI — Change Detection", + svc_4_desc: "Machine Learning on Sentinel-2 imagery to detect urban expansion, land cover changes, and multi-temporal territorial mutations.", + svc_5_title: "Enterprise GIS Dashboards", + svc_5_desc: "Interactive interfaces connected to PostGIS for real-time territorial analysis. Visualization of cadastral indicators for executives and entities.", + svc_6_title: "GIS Field Coordination", + svc_6_desc: "Leadership of technical teams in complex terrains. Topographic leveling, mass digitization, and coordination under NSR-10 regulations." 
}, es: { nav_about: "Sobre Mí", @@ -101,17 +119,20 @@ const translations = { nav_contact: "Contacto", nav_cv: "CV", hero_tag: "// INGENIERÍA DE SISTEMAS ESPACIALES", - hero_title: "DISEÑANDO EL FUTURO DE LOS DATOS ESPACIALES", + hero_h1: "Automatizando Sistemas de Inteligencia Territorial y Catastro Multipropósito", + hero_title: "Automatizando Sistemas de Inteligencia Territorial y Catastro Multipropósito", hero_desc: "Ingeniería geoespacial de alto rendimiento y arquitectura de sistemas automatizados.", hero_btn_projects: "Explorar Sistema", hero_btn_contact: "Contactame", + lab_live: "DGZ SPATIAL LAB · LIVE DEMOS", + lab_gesture: "Prueba Gratuita: Vision Sandbox (IA Gestual)", terminal_init: "CARGANDO_SISTEMA_CORE...", terminal_status_1: "RECUPERANDO_NODOS...", terminal_status_2: "SINCRONIZANDO_POSTGIS...", terminal_status_3: "OMITIENDO_LIMITES_MANUALES...", terminal_done: "ACCESO_CONCEDIDO.", - about_label: "// ARQUITECTURA", - about_title: "El Núcleo de Ingeniería", + about_label: "Núcleo de Ingeniería", + about_title: "Excelencia GIS & Arquitectura de Software", about_p1: "Cerramos la brecha entre el mapeo SIG tradicional y la arquitectura de software escalable. 
Enfocados en flujos de datos de alta precisión y automatización técnica.", proj_label: "// DESPLIEGUES_TÉCNICOS", cv_header: "CV BIO-NEURONAL // ALBERT GAVIRIA V5.2", @@ -131,8 +152,8 @@ const translations = { form_email: "Correo_Uplink", form_msg: "Breve_Misión", form_btn: "INICIAR_PROTOCOLO", - metrics_label: "// RENDIMIENTO_PROBADO", - metrics_title: "Núcleo de Impacto Neural", + metrics_label: "Rendimiento Comprobado", + metrics_title: "Métricas de Impacto", metric_roi: "Eficiencia Operativa", metric_error: "Reducción de Margen de Error", metric_team: "Personal Gestionado", @@ -159,9 +180,9 @@ const translations = { map_label: "// ESTACIÓN_SIG_INTEL_V6", map_title: "Hub de Inteligencia Territorial", map_desc: "Estación GIS full-stack — datos prediales interactivos, capas configurables, inteligencia de coordenadas en vivo. Haga clic en cualquier nodo para inspeccionar atributos LADM-COL.", - timeline_label: "// NODOS_TEMPORALES", + timeline_label: "Trayectoria Profesional", timeline_title: "Trayectoria Profesional", - timeline_desc: "Más de 6 años de ingeniería GIS especializada y territorial en los proyectos catastrales más complejos de Colombia.", + timeline_desc: "+6 años de ingeniería GIS especializada y territorial en los proyectos catastrales más complejos de Colombia.", tl_role_1: "Analista Geográfico", tl_desc_1: "Procesamiento avanzado de información geoespacial mediante fotointerpretación, análisis de insumos 360° y restitución vectorial. Digitalización masiva para actualización catastral con validación topológica bajo LADM-COL V3.", tl_role_2: "Control de Calidad Catastral (QA/QC)", @@ -174,9 +195,9 @@ const translations = { tl_desc_5: "Levantamientos fotogrametrícos con drones, gestión predial y producción de GIS vectorial. Análisis espacial y salidas cartográficas para proyectos inmobiliarios y ambientales.", tl_role_6: "Digitalizador / Topógrafo", tl_desc_6: "Levantamientos topográficos viales y digitalización GIS para proyectos de infraestructura. 
Trabajo de campo GPS y producción cartográfica bajo estándares profesionales.", - contact_title: "INICIAR\nCONEXIÓN.", - contact_desc: "Arquitectando la próxima generación de inteligencia espacial. Hablemos de su proyecto, municipio o desafío GIS empresarial.", - contact_label: "// INICIO_PROYECTO", + contact_title: "Hablemos de su
próximo proyecto.", + contact_desc: "Inteligencia territorial de próxima generación. Hablemos de su proyecto, municipio o desafío GIS empresarial.", + contact_label: "Iniciar Proyecto", contact_email_label: "Correo", contact_loc_label: "Ubicación", form_service: "Tipo_Servicio", @@ -187,7 +208,22 @@ const translations = { form_svc_4: "GeoAI / Análisis Satelital", form_svc_5: "Proyecto Gubernamental / Municipal", form_svc_6: "Otro / Consultoría", - cv_download: "Descargar Datasheet (PDF)" + cv_download: "Descargar Datasheet (PDF)", + hero_available: "Disponible para Proyectos · Medellín, Colombia", + cap_tag: "Servicios Especializados", + cap_title: "Soluciones especializadas para cada desafío territorial", + svc_1_title: "Actualización Catastral Masiva", + svc_1_desc: "Gestión y validación de +500 predios por ciclo. Automatización completa del flujo de reconocimiento predial bajo estándares IGAC y LADM-COL V3.", + svc_2_title: "QA/QC Catastral (Control de Calidad)", + svc_2_desc: "Auditorías técnicas con Python y QGIS. Detección automática de solapamientos, huecos y errores topológicos. Reportes PDF certificados para interventoría.", + svc_3_title: "Pipelines ETL Geoespaciales", + svc_3_desc: "Arquitectura de transformación desatendida para ingesta masiva de datos GIS. De campo a base de datos sin intervención manual.", + svc_4_title: "GeoAI — Detección de Cambios", + svc_4_desc: "Machine Learning sobre imagenería Sentinel-2 para detectar expansión urbana, cambios de cobertura y mutaciones territoriales multitemporales.", + svc_5_title: "Dashboards GIS Empresariales", + svc_5_desc: "Interfaces interactivas conectadas a PostGIS para análisis territorial en tiempo real. Visualización de indicadores catastrales para directivos y entidades.", + svc_6_title: "Coordinación de Campo GIS", + svc_6_desc: "Liderazgo de equipos técnicos en terrenos complejos. Nivelación topográfica, digitalización masiva y coordinación bajo normativa NSR-10." 
} }; @@ -312,7 +348,7 @@ document.addEventListener('DOMContentLoaded', () => { if (el.tagName === 'INPUT' || el.tagName === 'TEXTAREA') { el.placeholder = translations[lang][key]; } else { - el.textContent = translations[lang][key]; + el.innerHTML = translations[lang][key]; } } }); diff --git a/assets/js/vision/index.js b/assets/js/vision/index.js index dc98a72..a3c26b3 100644 --- a/assets/js/vision/index.js +++ b/assets/js/vision/index.js @@ -5,10 +5,23 @@ import { initMap, changeBasemap, toggleIGAC, toggleCadastre, togglePowerLayer } import { togglePanel, initMouseDrag, onFeatureHover } from './ui-controller.js'; import { initVision, initVisionElements, getCurrentGestureState } from './gestures.js'; -// 1. Mapbox Configuration -const MAPBOX_TOKEN = 'YOUR_MAPBOX_TOKEN_HERE'; +// 1. Mapbox Configuration - Fetching from backend for security +async function getMapboxToken() { + try { + // En Vercel, el backend suele estar en /api o la ruta configurada + const response = await fetch('/api/config/mapbox-token'); + if(!response.ok) throw new Error('Backend config error'); + const data = await response.json(); + return data.token; + } catch (e) { + console.warn("Security policy: no local fallback allowed."); + return ''; + } +} -document.addEventListener('DOMContentLoaded', () => { +document.addEventListener('DOMContentLoaded', async () => { + const MAPBOX_TOKEN = await getMapboxToken(); + // 2. Initialize Map const map = initMap(MAPBOX_TOKEN); diff --git a/backend/fastapi/app/main.py b/backend/fastapi/app/main.py index 54b40c3..7370146 100644 --- a/backend/fastapi/app/main.py +++ b/backend/fastapi/app/main.py @@ -1,3 +1,4 @@ +import os from fastapi import FastAPI, HTTPException from fastapi.middleware.cors import CORSMiddleware from .schemas import ValidationRequest, GeoJSONFeature @@ -60,3 +61,11 @@ async def detect_geospatial_changes(): "capabilities": ["Sentinel-2", "Random Forest", "NDVI_Analysis"], "message": "Node require high-performance raster kernel." 
} +@app.get("/config/mapbox-token", tags=["System"]) +async def get_mapbox_token(): + """Returns the Mapbox token from environment variables.""" + token = os.getenv("MAPBOX_TOKEN") + if not token: + # Empty fallback to avoid GitHub's security block + return {"token": ""} + return {"token": token} diff --git a/index.html b/index.html index 42865df..c20e122 100644 --- a/index.html +++ b/index.html @@ -13,18 +13,21 @@ - - + + + + + + content="GIS Colombia, Catastro Multipropósito, LADM-COL, automatización GIS, PostGIS, QGIS, ArcGIS Pro, Python GIS, inteligencia territorial, Medellín, GovTech, interventoría catastral"> @@ -71,36 +74,44 @@
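The token endpoint added to `main.py` above boils down to an environment lookup with an empty-string fallback. A minimal, framework-free sketch of that logic (the dict-based `env` parameter is only for testability; the real handler reads the process environment via `os.getenv`):

```python
import os

def get_mapbox_token(env=None):
    # Mirrors the /config/mapbox-token handler: read MAPBOX_TOKEN and
    # return an empty token instead of failing, so the frontend can
    # degrade gracefully and no key is ever hardcoded in the bundle.
    env = os.environ if env is None else env
    token = env.get("MAPBOX_TOKEN")
    if not token:
        return {"token": ""}
    return {"token": token}

print(get_mapbox_token({}))                        # → {'token': ''}
print(get_mapbox_token({"MAPBOX_TOKEN": "pk.x"}))  # → {'token': 'pk.x'}
```

The frontend's `getMapboxToken()` applies the same policy on its side, returning `''` on any non-OK response rather than falling back to a local key.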
+ @@ -283,16 +381,16 @@

-
Academic_History
+
Academic_Trajectory
-
Tecnólogo Gestión Nat.
-
SENA // - 2021
+
Ingeniería de Sistemas (C)
+
EAFIT // 2024 - + ACTUAL
-
Técnico Conservación
+
Técnico Conservación
SENA // 2019
@@ -333,12 +431,12 @@

Current_Mission
-
Analista +
Analista Geográfico
GEOGRAFÍA SATELITAL GEOSAT S.A.S. // 09/2025 - 12/2025
-

+

Procesamiento avanzado de información geoespacial mediante fotointerpretación, análisis de insumos 360° y restitución vectorial. Digitalización masiva para actualización catastral con validación topológica. @@ -351,24 +449,24 @@

Professional_Log

-
Coordinador Profesional SIG
+
Coordinador Profesional SIG
UT SMART CITY // 2024 - ACTUAL
-
Control Calidad Catastral (QA/QC) +
Control Calidad Catastral (QA/QC)
ACCION DEL CAUCA // 2025
-
Coordinador & Líder de Campo +
Coordinador & Líder de Campo
ARBIRTRIUM S.A.S // 2024 - 2025
-
Profesional SIG (Drones & GIS) +
Profesional SIG (Drones & GIS)
GRUPO EMPRESARIAL OD // 2023 - 2024
@@ -383,28 +481,28 @@

-
Sistemas de Gestión
+
Sistemas de Gestión
2023

🌍
-
Instituto Ambiental GIS
+
Instituto Ambiental SIG
2023
-
Diseño Eléctrico
+
Diseño Eléctrico
Electrical Training
👷
-
Seguridad y Salud
+
Seguridad y Salud
SG-SST 2023
@@ -419,431 +517,398 @@

- -
-
- - -

ALBERT GAVIRIA ZAPATA

-

- Automating Multipurpose Cadastre &
Territorial Intelligence Systems + +

+
+
+ + Disponible para Proyectos · Medellín, Colombia +
+ +

+ Automatizando Sistemas de Inteligencia Territorial y Catastro Multipropósito +

+ +

+ Ingeniería geoespacial de alto rendimiento y arquitectura de sistemas automatizados.

-

High-Performance Geospatial Engineering & Automated Systems for Multipurpose Cadastre, GovTech, and Territorial Intelligence.

-
- ⭐ Explore Spatial Lab - Link_Direct - 🗺️ Intel Map + + + +
+
+ 50k+ + Spatial Nodes +
+
+
+ 0.02s + Query Latency +
+
+
+ LADM-COL + V3 Protocol +
-
- Python - PostGIS - GIS Automation - LADM-COL - FastAPI - MapLibre GL - PyQGIS - GeoPandas +
+ + +
+
+
+ PostGIS Core +
+ +
+
Python
+
FastAPI
+
+ +
+
LADM
+
Docker
+
GeoAI
+
+ +
+
QGIS
+
React
+
Node.js
+
-
-
+
- -

Competencias

-
-
-
📊
-

Actualización Catastral

-
115%
-

Gestión de 500+ - predios en 4 meses. Reducción de errores del 30% mediante automatización.

-
- ArcGIS Pro LADM-COL -
-
-
-
💾
-

Validación Geográfica

-
QA/QC
-

Control experto de - bases de datos PostgreSQL/PostGIS garantizando integridad LADM-COL.

-
- QGIS Python -
-
-
-
🏗️
-

Supervisión de Obra

-
15+ Lead
-

Coordinación - técnica de infraestructura y gestión de informes bajo normativa NSR-10.

-
- AutoCAD MS Project -
+
Servicios Especializados
+

Soluciones especializadas para cada desafío territorial

+ +
+
+
🗺️
+

Actualización Catastral Masiva

+

Gestión y validación de +500 predios por ciclo. Automatización completa del flujo de reconocimiento predial bajo estándares IGAC y LADM-COL V3.

+
ArcGIS ProLADM-COLPostGIS
+
+
+
⚙️
+

QA/QC Catastral (Control de Calidad)

+

Auditorías técnicas con Python y QGIS. Detección automática de solapamientos, huecos y errores topológicos. Reportes PDF certificados para interventoría.

+
PythonShapelyQGIS
+
+
+
🔄
+

Pipelines ETL Geoespaciales

+

Arquitectura de transformación desatendida para ingesta masiva de datos GIS. De campo a base de datos sin intervención manual.

+
GDAL/OGRFMEFastAPI
+
+
+
🛰️
+

GeoAI — Detección de Cambios

+

Machine Learning sobre imagenería Sentinel-2 para detectar expansión urbana, cambios de cobertura y mutaciones territoriales multitemporales.

+
Scikit-learnRasterioSentinel-2
+
+
+
📊
+

Dashboards GIS Empresariales

+

Interfaces interactivas conectadas a PostGIS para análisis territorial en tiempo real. Visualización de indicadores catastrales para directivos y entidades.

+
MapLibre GLReactREST API
+
+
+
🏗️
+

Coordinación de Campo GIS

+

Liderazgo de equipos técnicos en terrenos complejos. Nivelación topográfica, digitalización masiva y coordinación bajo normativa NSR-10.

+
GPS RTKCivil 3DAutoCAD
- +
- -

Problemas que Resuelvo

-

- Los gobiernos y entidades territoriales enfrentan desafíos críticos de datos espaciales que frenan - el desarrollo. Estos son los problemas que transformo en soluciones automatizadas.

- -
-
-
⚠️   Territorial Problem
-
✅   DGZ Engineering Solution
-
-
-
🔴QA manual lento y propenso a errores
-
>>>Validación automática con Python + PostGIS
-
-
-
🔴Datos prediales inconsistentes
-
>>>Modelado espacial LADM-COL estructurado
-
-
-
🔴Procesos de digitalización lentos
-
>>>Pipelines SIG automatizados con GDAL/FME
-
-
-
🔴Topología catastral rota
-
>>>Motores de validación topológica automatizados
-
-
-
🔴Sin visibilidad sobre cambios de cobertura
-
>>>GeoAI: detección cambios con Sentinel + Rasterio
+
Desafíos Territoriales
+

Lo que resuelvo para su entidad

+

Los gobiernos y entidades territoriales enfrentan desafíos críticos de datos espaciales. Estos son los problemas que transformo en soluciones automatizadas.

+ +
+
+
01
+

QA manual lento y propenso a errores

+
+

Validación automática con Python + PostGIS

+
+
+
+
02
+

Datos prediales inconsistentes

+
+

Modelado espacial LADM-COL estructurado

+
+
+
+
03
+

Procesos de digitalización lentos

+
+

Pipelines SIG automatizados con GDAL/FME

+
+
+
+
04
+

Topología catastral rota

+
+

Motores de validación topológica automatizados

+
+
+
+
05
+

Sin visibilidad sobre cambios de cobertura

+
+

GeoAI: detección cambios con Sentinel + Rasterio

+
- +
-
+
+
- -

The Engineering Core

-
- -
-
-

- GIS EXCELLENCE &
SOFTWARE ARCHITECTURE.

-

+

Núcleo de Ingeniería
+ +
+
+

Excelencia GIS & Arquitectura de Software

+

Cerramos la brecha entre el mapeo SIG tradicional y la arquitectura de software escalable. Enfocados en flujos de datos de alta precisión y automatización técnica. Nuestro núcleo transforma datos geográficos sin estructura en inteligencia territorial automatizada y soberana.

- +
+
Arquitectura LADM-COL Autónoma
+
Pipeline ETL Desatendido
+
Nodos Geoespaciales en la Nube
+
-
- -
- - - - SYS_TERMINAL_V5.5 -
-
- [sys_init] DGZ_ENGINEERING_V5.5 SOVEREIGN_EDITION
-
- [status] CORE_MATRIX: ACTIVE +
+
+
+ Spatial Data Engineering + 95%
-
- [status] SPATIAL_QUERY_ENGINE: READY +
+
+
+
+ Python Automation (GDAL/PyQGIS) + 90%
-
- [telemetry] MANUAL_LATENCY: 0.00ms +
+
+
+
+ PostGIS & Database Architecture + 85%
-
- [inference] URBAN_CHANGE_NODE: DETECTED +
+
+
+
+ Frontend Web GIS (React/MapLibre) + 80%
+
- -
-
-
- -
+ +
- -

The Engineering Matrix

-
-
-
- -

GIS_SPATIAL_INFRA

+
Stack Tecnológico
+

La Matriz de Ingeniería

+ +
+ +
+
+
+

GIS_SPATIAL_INFRA

+

PostGIS topology matrix, LADM-COL V3 compliance, and high-fidelity vector architecture.

+
+
+ PostGIS PostGIS + QGIS QGIS + Supabase Supabase + LADM-COL +
+
+ +
+
+
+

SYSTEMS_ENGINEERING

+

Backend systems for spatial intelligence, RESTful Node architecture, and automated logic.

+
+
+ Python Python + FastAPI FastAPI + TypeScript TypeScript + Node.js Node.js + React & Vite React + Vite +
+
+ +
+
+
+

AUTOMATION_DEVOPS

+

CI/CD pipelines for geospatial assets, containerized spatial nodes, and kernel optimization.

+
+
+ Actions GitHub Actions + Docker Docker + Vercel Vercel + Ubuntu Ubuntu / Linux
-
- GIS Tech -
-

PostGIS topology matrix, LADM-COL V3 compliance, and high-fidelity vector architecture.

-
-
-
- -

SYSTEMS_ENGINEERING

-
-
- Language Stack -
-

Backend systems for spatial intelligence, RESTful Node architecture, and automated logic.

-
-
-
- -

AUTOMATION_DEVOPS

-
-
- DevOps Tech -
-

CI/CD pipelines for geospatial assets, containerized spatial nodes, and kernel optimization.

- -
+ +
- -

Impact Metrics

- -
-
-
🗺️
-
-
50k+
-
Spatial Records Processed
-
Predios // Cartografía // Catastro
-
-
-
-
-
-
-30%
-
Error Margin Reduction
-
QA/QC // Automation // Validation
-
-
-
-
🏗️
-
-
115%
-
Operational Efficiency
-
ROI // Automatización Workflow
-
-
-
-
🌎
-
-
6+
-
Multipurpose Cadastre Projects
-
LADM-COL // IGAC Standards
-
+
Rendimiento Comprobado
+

Métricas de Impacto

+ +
+
+
50k+
+
Spatial Records Processed
+
Parcels // Cartography // Cadastre
+
+
+
+
-30%
+
Error Margin Reduction
+
QA/QC // Automation // Validation
+
+
+
+
115%
+
Operational Efficiency
+
ROI // Workflow Automation
+
+
+
+
6+
+
Multipurpose Cadastre Projects
+
LADM-COL // IGAC Standards
+
- -
-
-
DGZ SPATIAL LAB // LIVE_DEMOS
-

Spatial Intelligence Laboratory

-

- High-performance geospatial nodes demonstrating territorial automation, - multipurpose cadastral engines, and advanced spatial systems engineering. - The following assets are active demonstrations, not mere placeholders. -

-
- - Prueba Gratuita: Vision Sandbox (IA Gestual) - + +
+
+
+
+ DGZ SPATIAL LAB · LIVE DEMOS +
+

Spatial Intelligence Laboratory

+

+ High-performance geospatial nodes demonstrating territorial automation, multipurpose cadastral engines, and advanced spatial systems engineering. +

+
-
-
- -
-
SYSTEM_ID: SPATIAL_VIEWER_01
-
🗺️
-

Interactive Spatial Node

-

High-precision GIS viewer with real-time parcel rendering. Interacting with local datasets exported from QGIS with deep topology attributes.

-
- MapLibre GL - Vector Tiles - PostGIS -
-
-
-
-
-
-
-
-
DATA_SOURCE: SOVEREIGN_LOCAL
-
-
- - - EXECUTE_VIEWER - -
+
+ +
+
+
SYS_NODE: VIEWER_01
+
+
+

Interactive Spatial Node

+

High-precision GIS viewer with real-time parcel rendering. Interacting with local datasets exported from QGIS with deep topology attributes.

+
+ MapLibre GLVector TilesPostGIS +
+
+
+
+
+
+ + Launch Viewer + +
- -
-
SYSTEM_ID: CADASTRAL_VALIDATOR_02
-
⚙️
-

Cadastral Intelligence Engine

-

Automated LADM-COL V3 validator. Detecting overlaps, slivers, and topological inconsistencies using high-fidelity Python kernels.

-
- FastAPI - Shapely - Topology -
-
-
-
-
-
-
-
-
-
- - - - ACCESS_FULL_INTERFACE - -
-
- -
-
SYSTEM_ID: AUTOMATION_PIPELINE_03
-
🔄
-

Sovereign Data Pipelines

-

Unattended ETL architecture for massive geospatial ingestion, ensuring data integrity across distributed GIS nodes.

-
- GDAL/OGR - ETL - CI/CD -
-
-
-
- 📡 Field Survey - -
-
-
- 🔄 ETL_TRANSFORM - -
-
-
- 🔍 QA_QC_VALIDATION - -
-
-
- 🗄️ GEODB_SYNC - + +
+
+
SYS_NODE: PIPELINE_03
+
+
+

Sovereign Data Pipelines

+

Unattended ETL architecture for massive geospatial ingestion, ensuring data integrity across distributed GIS nodes.

+
+ GDAL/OGRETLCI/CD +
+
+
+ Survey +
+
+ Transform +
+
+ GeoDB Sync +
@@ -853,8 +918,8 @@

Sovereign Data Pipelines

- -

Despliegues Técnicos

+
Proyectos Destacados
+

Despliegues Técnicos

@@ -876,7 +941,7 @@

Despliegues Técnicos

class="tech-tag">REST_API
Explore Ecosystem + style="width: fit-content;" data-i18n="lab_p2_full">Explore Ecosystem
@@ -897,10 +962,10 @@

Despliegues Técnicos

SYSTEM_ID: AUTOMATION_01
-

Territorial ETL Pipelines +

Territorial ETL Pipelines

+ style="color:var(--text-secondary); font-size: 0.9rem; margin-bottom: 1.5rem; flex-grow:1; line-height:1.6;" data-i18n="proj_2_desc"> Automatización desatendida de transformaciones de datos geográficos para evitar procesos manuales lentos.

@@ -915,10 +980,10 @@

Territorial ETL
SYSTEM_ID: DASHBOARD_02
-

Enterprise GIS Dashboard +

Enterprise GIS Dashboard

+ style="color:var(--text-secondary); font-size: 0.9rem; margin-bottom: 1.5rem; flex-grow:1; line-height:1.6;" data-i18n="proj_3_desc"> Interfaz interactiva conectada a PostGIS para análisis territorial y telemetría en tiempo real.

@@ -933,9 +998,9 @@

Enterprise GIS D
SYSTEM_ID: PYQGIS_PLUGIN_03
-

LADM-COL QGIS Plugin

+

LADM-COL QGIS Plugin

+ style="color:var(--text-secondary); font-size: 0.9rem; margin-bottom: 1.5rem; flex-grow:1; line-height:1.6;" data-i18n="proj_4_desc"> Scripts insertados como herramientas nativas en la UI de QGIS para validación topológica instantánea.

@@ -951,9 +1016,9 @@

LADM-COL QGIS Pl
SYSTEM_ID: GEO_AI_NODE_01
-

GeoAI Experimental

+

GeoAI Experimental

+ style="color:var(--text-secondary); font-size: 0.9rem; margin-bottom: 1.5rem; flex-grow:1; line-height:1.6;" data-i18n="proj_5_desc"> Laboratorio de prototipos de Machine Learning sobre Sentinel-2 para detección de cambios urbanos.

@@ -971,9 +1036,9 @@

GeoAI Experiment
[NEW] SYSTEM_ID: GEO_LLM_05
-

Geo-LLM Intelligence

+

Geo-LLM Intelligence

+ style="color:var(--text-secondary); font-size: 1rem; margin-bottom: 1.5rem; max-width: 600px; line-height:1.6;" data-i18n="proj_6_desc"> Agente de Inteligencia Artificial que traduce lenguaje natural a sentencias Spatial SQL y genera informes catastrales estadísticos en tiempo real.

Geo-LLM Intellig
- -

GeoAI Intelligence

+ +

GeoAI Intelligence

-

Urban Change Detection
con Imagenería Sentinel

-

Aplicación de inteligencia artificial geoespacial para detectar cambios de cobertura urbana y +

Urban Change Detection
con Imagenería Sentinel

+

Aplicación de inteligencia artificial geoespacial para detectar cambios de cobertura urbana y rural comparando imágenes satelitales multitemporales. Pipeline completo desde descarga Sentinel hasta exportación GeoJSON.

@@ -1005,29 +1070,29 @@

Urban Change Detection
con Imagenería Sentinel

01
-

Descarga Imagenería Sentinel-2

-

API Copernicus // Bandas espectrales B04, B08, B11 // resolución 10m

+

Descarga Imagenería Sentinel-2

+

API Copernicus // Bandas espectrales B04, B08, B11 // resolución 10m

02
-

Procesamiento con Rasterio + GeoPandas

-

Normalización radiométrica // Comparación multitemporal // NumPy arrays

+

Procesamiento con Rasterio + GeoPandas

+

Normalización radiométrica // Comparación multitemporal // NumPy arrays

03
-

Clasificación con Scikit-learn

-

Random Forest // Detección de cambios // Vectorización de polígonos

+

Clasificación con Scikit-learn

+

Random Forest // Detección de cambios // Vectorización de polígonos

04
-

Exportación GeoJSON → Web GIS

-

Visualización en MapLibre GL // Integración PostGIS // API REST

+

Exportación GeoJSON → Web GIS

+

Visualización en MapLibre GL // Integración PostGIS // API REST

@@ -1043,7 +1108,7 @@

Exportación GeoJSON → Web GIS

-
[CHANGE_DETECTION_ENGINE] STATUS: PROCESSING_DEMO
+
[CHANGE_DETECTION_ENGINE] STATUS: PROCESSING_DEMO
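The four pipeline steps above (Sentinel-2 download, Rasterio/GeoPandas processing, scikit-learn classification, GeoJSON export) hinge on comparing per-date spectral indices. A simplified, NumPy-only sketch of the core comparison, NDVI differencing; the production engine uses Random Forest classification per the text, and the threshold and toy arrays here are illustrative:

```python
import numpy as np

def ndvi(red, nir):
    # NDVI = (NIR - RED) / (NIR + RED); Sentinel-2 band B04 is red
    # and B08 is near-infrared, both at 10 m resolution.
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def change_mask(red_t0, nir_t0, red_t1, nir_t1, threshold=0.2):
    # Flag pixels whose NDVI dropped by more than `threshold` between
    # the two dates: a crude proxy for vegetation loss / urban expansion.
    delta = ndvi(red_t1, nir_t1) - ndvi(red_t0, nir_t0)
    return delta < -threshold

# Toy 2x2 scene: only the top-right pixel loses vegetation.
red_t0 = np.array([[0.1, 0.1], [0.1, 0.1]])
nir_t0 = np.array([[0.6, 0.6], [0.6, 0.6]])
red_t1 = np.array([[0.1, 0.4], [0.1, 0.1]])
nir_t1 = np.array([[0.6, 0.6], [0.6, 0.6]])
nir_t1[0, 1] = 0.2
mask = change_mask(red_t0, nir_t0, red_t1, nir_t1)
print(mask.tolist())  # → [[False, True], [False, False]]
```

Vectorizing the resulting boolean mask into polygons (step 04) is what produces the GeoJSON layer shown in the web viewer.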
@@ -1077,9 +1142,9 @@

Exportación GeoJSON → Web GIS

- -

System Architecture

-

System Architecture

+

La arquitectura completa del stack GovTech moderno — desde captura en campo hasta publicación web. Diseñado para escalar a nivel de municipio, departamento o nación.

@@ -1088,40 +1153,40 @@

System Architecture

📡
-
Field Capture
-
GPS // 360° Survey // Drones
+
Field Capture
+
GPS // 360° Survey // Drones
ETL Process
🐍
-
Python Validation Engine
-
GDAL // GeoPandas // Shapely
+
Python Validation Engine
+
GDAL // GeoPandas // Shapely
Topology Check
🗄️
-
PostGIS Database
-
PostgreSQL // LADM-COL Schema
+
PostGIS Database
+
PostgreSQL // LADM-COL Schema
REST API
-
FastAPI Spatial API
-
/validate // /topology // /parcel_score
+
FastAPI Spatial API
+
/validate // /topology // /parcel_score
GeoJSON Tiles
🌐
-
Web GIS Interface
-
MapLibre GL JS // Chart.js
+
Web GIS Interface
+
MapLibre GL JS // Chart.js
@@ -1151,117 +1216,103 @@

System Architecture

- -
+ +
- -

Professional Trajectory

-

6+ years of specialized GIS and territorial engineering across Colombia's most complex cadastral projects.

- -
- -
-
-
-
-
2025 Q4 — Actual
-
● ACTIVE
-
-

Analista Geográfico

-
GEOGRAFÍA SATELITAL GEOSAT S.A.S. // Medellín
-

Advanced geospatial processing via photointerpretation, 360° input analysis and vector restitution. Mass digitization for cadastral update with topological validation under LADM-COL V3.

-
LADM-COL V3360° SurveyPyQGISRestitución
+ +
+ +
+
Trayectoria Profesional
+

Experiencia
Técnica

+

+ +6 años de ingeniería GIS especializada y territorial en los proyectos catastrales y de infraestructura más complejos de Colombia. +

+
+
PROFILE: SENIOR GIS ENGINEER
+
STATUS: DISPONIBLE
-
-
-
-
-
2025 Q2
-
✓ COMPLETADO
-
-

Control de Calidad Catastral (QA/QC)

-
ACCION DEL CAUCA S.A.S. // Solita, Caquetá
-

Technical quality audits for predial recognition. IGAC QA standards enforcement, topological consistency checks, and field report validation.

-
IGAC QAAuditTopologíaPredios
+ +
+ +
+
2025 Q4 — Actual
+

Analista Geográfico

+
GEOGRAFÍA SATELITAL GEOSAT S.A.S. // Medellín
+

Advanced geospatial processing via photointerpretation, 360° input analysis and vector restitution. Mass digitization for cadastral update with topological validation under LADM-COL V3.

+
LADM-COL V3360° SurveyPyQGISRestitución
-
-
-
-
-
-
2024 — 2025
-
✓ COMPLETADO
-
-

Coordinador & Líder de Campo SIG

-
ARBIRTRIUM S.A.S. // Urabá, Antioquia
-

Co-leader of field survey teams. Urban/rural topographic leveling, mass digitization, and GIS coordination for multipurpose cadastre missions in complex terrain.

-
Barrido PredialCoordinaciónQGISGPS RTK
+
+
2025 Q2
+

Control de Calidad Catastral (QA/QC)

+
ACCION DEL CAUCA S.A.S. // Solita, Caquetá
+

Technical quality audits for predial recognition. IGAC QA standards enforcement, topological consistency checks, and field report validation.

+
IGAC QAAuditTopología
-
-
-
-
-
-
2024 — Actual
-
● ACTIVE
-
-

Coordinador Profesional SIG

-
UT SMART CITY // Barranquilla
-

LADM-COL standards implementation, mass digitization and GIS coordination for Smart City multipurpose cadastre deployment. QA/QC across distributed field nodes.

-
LADM-COLSmart CityMass Digit.QA/QC
+
+
2024 — 2025
+

Coordinador & Líder de Campo SIG

+
ARBIRTRIUM S.A.S. // Urabá, Antioquia
+

Co-leader of field survey teams. Urban/rural topographic leveling, mass digitization, and GIS coordination for multipurpose cadastre missions in complex terrain.

+
Barrido PredialCoordinaciónQGISGPS RTK
-
-
-
-
-
-
2023 — 2024
-
✓ COMPLETADO
-
-

Profesional SIG — Drones & Geomática

-
GRUPO EMPRESARIAL OD // Colombia
-

Drone-based photogrammetric surveys, predial management, and vector GIS production. Spatial analysis and cartographic outputs for real estate and environmental projects.

-
DronesFotogrametríaPostGISGeoPandas
+
+
2024 — Actual
+

Coordinador Profesional SIG

+
UT SMART CITY // Barranquilla
+

LADM-COL standards implementation, mass digitization and GIS coordination for Smart City multipurpose cadastre deployment. QA/QC across distributed field nodes.

+
LADM-COLSmart CityMass Digit.
-
-
-
-
-
-
2021 — 2022
-
✓ COMPLETADO
-
-

Digitalizador / Topógrafo

-
CINTELI / Consorcio Vial BAQ // Barranquilla
-

Road topography surveys and GIS digitization for vial infrastructure projects. GPS field surveys and cartographic production under professional standards.

-
TopografíaGPSAutoCADInfraestructura
+
+
2023 — 2024
+

Profesional SIG — Drones & Geomática

+
GRUPO EMPRESARIAL OD // Colombia
+

Drone-based photogrammetric surveys, predial management, and vector GIS production. Spatial analysis and cartographic outputs for real estate and environmental projects.

+
DronesFotogrametríaPostGIS
+
+ +
+
2021 — 2022
+

Digitalizador / Topógrafo

+
CINTELI / Consorcio Vial BAQ // Barranquilla
+

Road topography surveys and GIS digitization for vial infrastructure projects. GPS field surveys and cartographic production under professional standards.

+
TopografíaGPSAutoCAD
-
+
-
- - -
- -

Territorial Intelligence Hub

-

+

+
+
SPATIAL_INTEL_WORKSTATION_V6
+

Territorial Intelligence Hub

+

Full-stack GIS workstation — interactive parcel data, toggleable layers, real coordinate intelligence, and live attribute queries. Click any map node to inspect its LADM-COL attributes.

+
+
STATUS: OFFLINE // AWAITING INIT
+ + +
+
+
- -
+ +
+
+
@@ -1304,7 +1355,7 @@

Territorial Intelligence Hub

- LAYER_MANAGER + LAYER_MANAGER
@@ -1327,7 +1378,7 @@

Territorial Intelligence Hub

-
VECTOR_LAYERS
+
VECTOR_LAYERS
@@ -1337,40 +1388,47 @@

Territorial Intelligence Hub

Professional Nodes
- +
- +
- +
- +
@@ -1378,33 +1436,36 @@

Territorial Intelligence Hub

-
LEGEND
-
+
LEGEND
+
- HQ Node — Active + HQ Node — Active
- Field Node — Remote + Field Node — Remote
-
- Parcel — Validated +
+ Parcel — Validated
-
- Parcel — Error +
+ Parcel — Error
-
- Urban Perimeter +
+ Urban Perimeter
-
PROJECTION INFO
+
PROJECTION INFO
CRS: WGS84 EPSG:4326
Units: Decimal Degrees
@@ -1433,14 +1494,17 @@

Territorial Intelligence Hub

- @@ -1454,7 +1518,8 @@

Territorial Intelligence Hub

🗺️
-
+
SELECT A FEATURE
TO DISPLAY
ATTRIBUTES
@@ -1499,7 +1564,8 @@

Territorial Intelligence Hub

- ● SYSTEM READY — DGZ Territorial Intelligence v6.0 + ● SYSTEM READY — DGZ Territorial Intelligence + v6.0 | Features: 6 | @@ -1509,8 +1575,43 @@

Territorial Intelligence Hub

| 00:00:00
+
+
+ + + -
@@ -1523,36 +1624,39 @@

Territorial Intelligence Hub

-
+
ACQUIRE_OS_LICENSE // SPATIAL_INTELLIGENCE_SaaS
-

DGZ SPATIAL OS

-

+

DGZ SPATIAL + OS

+

El motor de validación catastral más avanzado de Latinoamérica.
- Diseñado para interventorías, municipios y contratistas GIS que necesitan resultados reales. + Diseñado para interventorías, municipios y contratistas GIS + que necesitan resultados reales.

800%
-
Faster than manual QA
+
Faster than manual QA
-
-30%
-
Error Reduction
+
-30%
+
Error Reduction
-
50k+
-
Parcels Processed
+
50k+
+
Parcels Processed
LADM-COL
-
V3 Certified Engine
+
V3 Certified Engine
@@ -1561,27 +1665,27 @@

DGZ
- ⛔ WITHOUT DGZ SPATIAL OS + ⛔ WITHOUT DGZ SPATIAL OS
    -
  • QA manual: 3-5 días por municipio
  • -
  • Errores topológicos sin detectar
  • -
  • Cruce SNR manual (propenso a error)
  • -
  • Reportes en Excel sin estándar
  • -
  • Sin trazabilidad sobre mutaciones
  • +
  • QA manual: 3-5 días por municipio
  • +
  • Errores topológicos sin detectar
  • +
  • Cruce SNR manual (propenso a error)
  • +
  • Reportes en Excel sin estándar
  • +
  • Sin trazabilidad sobre historia de mutaciones
VS
- ✅ WITH DGZ SPATIAL OS + ✅ WITH DGZ SPATIAL OS
    -
  • Validación automática en minutos
  • -
  • Motor topológico LADM-COL V3
  • -
  • Cruce matricular automático con SNR
  • -
  • PDF Técnico y GeoJSON certificado
  • -
  • Trazabilidad completa de cada predio
  • +
  • Validación automática en minutos
  • +
  • Motor topológico LADM-COL V3
  • +
  • Cruce matricular automático SNR
  • +
  • PDF Técnico y GeoJSON certificado
  • +
  • Trazabilidad completa de cada predio

@@ -1591,28 +1695,30 @@

DGZ -
STARTER_CORE
-

Spatial Explorer

-

Para profesionales GIS, técnicos y estudiantes avanzados.

+
STARTER_CORE
+

Spatial Explorer

+

Para profesionales GIS, técnicos y estudiantes avanzados.

- $0 - /mes · Forever Free + $0 + /mes · Forever Free
-
Capacidad: 5,000 predios/capa
-
+
Capacidad: 5,000 predios/capa
+
+
+
    -
  • Validación topológica LADM-COL básica
  • -
  • Hasta 5,000 predios por capa
  • -
  • 5 Reportes Geo-LLM / día
  • -
  • Visor GIS interactivo embebido
  • -
  • Cruce Físico-Jurídico SNR
  • -
  • API FastAPI para automatización
  • -
  • Soporte técnico premium
  • +
  • Validación topológica LADM-COL básica
  • +
  • Hasta 5,000 predios por capa
  • +
  • 5 Reportes Geo-LLM / día
  • +
  • Visor GIS interactivo embebido
  • +
  • Cruce Físico-Jurídico SNR
  • +
  • API FastAPI para automatización
  • +
  • Soporte técnico premium
- Comenzar Gratis + Comenzar Gratis
No credit card · Zero commitment

@@ -1621,58 +1727,71 @@

Spatial Explorer

-
GOVERNMENT_NODE
-

Sovereign Tier

-

Para Oficinas de Catastro, IGAC, Alcaldías y grandes contratistas.

+
GOVERNMENT_NODE
+

Sovereign Tier

+

Para Oficinas de Catastro, IGAC, Alcaldías y grandes contratistas.

- Custom - /proyecto · On-prem available + Custom + /proyecto · On-prem available
-
Enterprise-Scale · Dedicated Infra
-
+
Enterprise-Scale · Dedicated Infra
+
+
+
    -
  • Despliegue On-Premises (servidores propios)
  • -
  • Integración SGDEA / Rentas municipales
  • -
  • Modelos Geo-LLM fine-tuneados al municipio
  • -
  • Capacitación presencial del equipo GIS
  • -
  • SLA técnico garantizado (72h response)
  • -
  • White-label: marca propia del municipio
  • -
  • Acceso full al source code (licencia)
  • +
  • Despliegue On-Premises (servidores propios)
  • +
  • Integración SGDEA / Rentas municipales
  • +
  • Modelos Geo-LLM fine-tuneados al municipio
  • +
  • Capacitación presencial del equipo GIS
  • +
  • SLA técnico garantizado (72h response)
  • +
  • White-label: marca propia del municipio
  • +
  • Acceso full al source code (licencia)
- Contactar Arquitecto + Contactar Arquitecto -
Respuesta en < 24h · NDA disponible
+
Respuesta en < 24h · NDA + disponible
@@ -1711,18 +1830,18 @@

Sovereign Tier

- +
Iniciar Proyecto
-

INITIATE
CONNECTION.

-

Architecting the next generation of spatial intelligence. Let's discuss your project, municipality, or enterprise GIS challenge.

+

Hablemos de su
próximo proyecto.

+

Inteligencia territorial de próxima generación. Hablemos de su proyecto, municipio o desafío GIS empresarial.

-
-
>> COORD: 6.2442°N, 75.5812°W
-
>> PROTOCOL: LADM-COL_V3
-
>> STATUS: AVAILABLE_FOR_PROJECTS
+
+
📍 Medellín, Colombia · 6.2442°N, 75.5812°W
+
📋 Protocolo LADM-COL V3
+
✓ Disponible para proyectos
-
TRANSMISSION_FORM // ENCRYPTED_CHANNEL
+
Formulario de Contacto
- +
- +
@@ -1775,13 +1897,21 @@

INITIATE
CONNECTION.<

- +
-
+
+
@@ -1789,62 +1919,53 @@

INITIATE
CONNECTION.<

- -