sneaks/helm_ai
SailingGPT

This project uses a simulated NMEA data stream from a sailing voyage to demonstrate the use of an LLM assistant in a marine navigation context.



🚀 Project Architecture

This project consists of two main components:

  1. Sailing Simulator: Mimics a typical NMEA 2000 data stream, broadcasting sailing data over WebSockets and UDP.
  2. MCP Server: Connects to the simulator and exposes the data as AI-queryable tools (using Model Context Protocol).
```mermaid
graph LR
    Sim[Sailing Simulator] -- WebSocket --> MCP[MCP Server] -- MCP --> LLM[AI Assistant]
```
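Because the simulator broadcasts over WebSockets, any client can subscribe to the stream directly, not just the MCP server. A minimal sketch, assuming the third-party `websockets` package and the default endpoint `ws://127.0.0.1:2053` (the `GOFREE_URL` used in the Claude Desktop config below); the field names in `summarize` are illustrative, not this project's actual message schema:

```python
import asyncio
import json

GOFREE_URL = "ws://127.0.0.1:2053"  # default simulator endpoint

def summarize(msg: dict) -> str:
    """Render one decoded message as a short human-readable line.
    The field names ('heading', 'sog') are hypothetical examples."""
    heading = msg.get("heading", "?")
    sog = msg.get("sog", "?")
    return f"HDG {heading} SOG {sog} kt"

async def watch(url: str = GOFREE_URL) -> None:
    """Connect to the simulator and print each message as it arrives."""
    import websockets  # pip install websockets

    async with websockets.connect(url) as ws:
        async for raw in ws:
            print(summarize(json.loads(raw)))

# With the simulator running (./tools/start_boat.sh):
#   asyncio.run(watch())
```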

Key files and layout

  • config.py — Optional config file + env; used by MCP server, UI server, and CLI for URLs and defaults.
  • prompts/sailing_assistant_persona.txt — Naavi system prompt (see "Naavi persona" below).
  • scripts/common.sh — Shared venv, dotenv, and port logic; sourced by tools/open_ui.sh, run_simulation.sh, tools/start_boat.sh, tools/start_bridge.sh.
  • data/processed/*.json — Scenario files (waypoints + conditions). Enriched Nola GPX scenarios live here (e.g. nola_A18D6D0B_enriched.json).

🛠️ Getting Started

1. Prerequisites

pip install -r simulator/requirements.txt
pip install -r mcp_server/requirements.txt

Nola data (optional, for nuance extraction)

Scripts that use Nola race tracks (e.g. scripts/extract_nuance.py) expect the Nola repo as a git submodule at nola/. After cloning, run:

git submodule update --init --recursive

If you clone without --recurse-submodules, run the above once to fetch the Nola data. To add the submodule from a Nola repo URL:

git submodule add <nola-repo-url> nola
git submodule update --init

Override the path with env NOLA_DATA_DIR if your layout differs.
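The path resolution described above can be sketched in a few lines: use `NOLA_DATA_DIR` if it is set, otherwise fall back to the `nola/` submodule under the project root. This is an illustration of the lookup, not the code the scripts actually use:

```python
import os
from pathlib import Path

def nola_data_dir(project_root: str = ".") -> Path:
    """Return the Nola data directory: NOLA_DATA_DIR if set, else <root>/nola."""
    override = os.environ.get("NOLA_DATA_DIR")
    return Path(override) if override else Path(project_root) / "nola"
```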

2. Start the Simulator

This starts the "boat" and generates the sensor data stream.

./tools/start_boat.sh

3. Start the MCP Server

In a new terminal, start the bridge that allows an AI to "see" the boat data.

./tools/start_bridge.sh

4. Voyage UI (validate scenario data + chat)

One command (simulator + UI, same scenario):

./run_simulation.sh [scenario_file] [start_waypoint_index]

Starts the simulator in the background and the UI in the foreground; both use the same scenario. On Ctrl+C the simulator is stopped.

Or run them separately: start the simulator with ./tools/start_boat.sh [scenario] [start_index], then in another terminal ./tools/open_ui.sh [scenario]. Open the Map for the voyage view and live strip, or Chat to talk to the model (LM Studio with sailing-gpt MCP must be running). See docs/ui.md for options and details.

Chat from terminal

You can query the same sailing assistant from the CLI (no browser). LM Studio must be running with the model and MCP plugin loaded.

python scripts/chat.py "What's the best heading in 15 kt wind?"
python scripts/chat.py    # interactive; type 'exit' or Ctrl+D to quit

Or: python -m ui chat [message] with the same options (--max-tokens, --temperature). Uses the same config and persona as the web chat.

🧠 Model Integration (Phase 2)

LM Studio Connection

  1. Open LM Studio.
  2. Enable Allow per-request MCPs in the settings.
  3. Add a new MCP server pointing to mcp_server/server.py.
  4. Ask the AI: "What is our current heading and wind speed?"

Naavi persona (system prompt)

The sailing-assistant persona (concise, cardinal headings, no raw JSON) is defined in prompts/sailing_assistant_persona.txt. Override the path with env SAILING_PROMPT_PATH or config key sailing_prompt_path.

  • Voyage UI chat: The UI server loads this file and sends it as system_prompt on every request to LM Studio's /api/v1/chat. Edits to the file take effect on the next message (no restart). See ui/server.py (_load_sailing_prompt, _proxy_chat).
  • Claude Desktop / other MCP clients: The MCP server exposes the same text as an MCP prompt (sailing_assistant_persona). Whether the client uses it as the system message depends on the client; Claude Desktop may use it if configured, or you can paste the persona into the client's custom instructions. The MCP server reads the file at runtime when the prompt is requested.
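The reload-on-every-message behaviour is simple to replicate: read the persona file per request and attach it to the chat payload. A sketch using only the standard library; the payload shape for LM Studio's `/api/v1/chat` is simplified here, and the real field names live in `ui/server.py` (`_load_sailing_prompt`, `_proxy_chat`):

```python
import json
import urllib.request
from pathlib import Path

PROMPT_PATH = Path("prompts/sailing_assistant_persona.txt")

def load_persona(path: Path = PROMPT_PATH) -> str:
    """Re-read the persona file on every call, so edits apply to the next message."""
    return path.read_text(encoding="utf-8").strip()

def build_chat_request(message: str, persona: str) -> dict:
    """Illustrative payload shape; see ui/server.py for the real one."""
    return {
        "system_prompt": persona,
        "messages": [{"role": "user", "content": message}],
    }

def send(url: str, payload: dict) -> bytes:
    """POST the payload to the chat endpoint and return the raw response body."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```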

Configuration (optional)

Endpoints and defaults can be set via environment variables or an optional config file (config.json in the project root). Env always overrides config. Keys: GOFREE_URL, LM_STUDIO_URL, LM_STUDIO_API_TOKEN, LM_STUDIO_MODEL, LM_STUDIO_MCP_PLUGIN_ID, DEFAULT_SCENARIO, SAILING_PROMPT_PATH, CONFIG_PATH. See config.py for the full list.
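The precedence rule ("env always overrides config") can be sketched as a single lookup helper. This assumes `config.json` uses lower-cased versions of the env names (consistent with the `sailing_prompt_path` key mentioned above), which is a guess; see `config.py` for the real implementation:

```python
import json
import os
from pathlib import Path

def load_setting(name: str, config_path: str = "config.json", default=None):
    """Resolve a setting: environment first, then config.json, then a default.
    Env vars use upper-case names (e.g. GOFREE_URL); the config file is
    assumed to use lower-case keys (e.g. gofree_url)."""
    if name in os.environ:
        return os.environ[name]  # env always wins
    path = Path(config_path)
    if path.exists():
        return json.loads(path.read_text()).get(name.lower(), default)
    return default
```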

Other backends (Ollama, OpenAI, etc.)

The Voyage UI chat is wired to LM Studio. To use another LLM backend, run the MCP server (./tools/start_bridge.sh) and point Claude Desktop (or another MCP-capable client) at it; set your model/API in that client. The MCP tools work with any client that supports MCP.

Claude Desktop integration

Add the following to your claude_desktop_config.json (replace <path-to-this-project-root> with the absolute path to this project). Use the project's venv Python so dependencies resolve correctly; set GOFREE_URL if the simulator runs on a different host/port.

{
  "mcpServers": {
    "sailing-gpt": {
      "command": "<path-to-this-project-root>/.venv/bin/python",
      "args": ["-m", "mcp_server.server"],
      "env": {
        "PYTHONPATH": "<path-to-this-project-root>",
        "GOFREE_URL": "ws://127.0.0.1:2053"
      }
    }
  }
}

The simulator must be running (e.g. ./tools/start_boat.sh) so the tools return live data.

