This project uses a simulated NMEA data stream from a sailing voyage to demonstrate the use of an LLM assistant in a marine navigation context.
It consists of two main components:
- Sailing Simulator: Mimics a typical NMEA 2000 data stream, broadcasting sailing data over WebSockets and UDP.
- MCP Server: Connects to the simulator and exposes the data as AI-queryable tools (using Model Context Protocol).
```mermaid
graph LR
    Sim[Sailing Simulator] -- WebSocket --> MCP[MCP Server] -- MCP --> LLM[AI Assistant]
```
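As a concrete sketch of the consumer side, the snippet below receives one UDP datagram from the simulator and decodes it. The JSON field names (`heading`, `sog`, `tws`) and the idea that each datagram is a standalone JSON object are assumptions for illustration; check the simulator source for the real schema and port.

```python
import json
import socket

def parse_boat_datagram(raw: bytes) -> dict:
    """Decode one simulator datagram.

    The field names below (heading, sog, tws) are assumptions for
    illustration; check the simulator source for the real schema.
    """
    msg = json.loads(raw.decode("utf-8"))
    # Keep only the fields we care about, tolerating absent keys
    return {k: msg.get(k) for k in ("heading", "sog", "tws")}

def listen_once(port: int, timeout: float = 5.0) -> dict:
    """Receive a single UDP datagram from the simulator and parse it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", port))
        sock.settimeout(timeout)
        data, _addr = sock.recvfrom(65535)
        return parse_boat_datagram(data)
```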
- `config.py` — Optional config file + env; used by the MCP server, UI server, and CLI for URLs and defaults.
- `prompts/sailing_assistant_persona.txt` — Naavi system prompt (see "Naavi persona" below).
- `scripts/common.sh` — Shared venv, dotenv, and port logic; sourced by `tools/open_ui.sh`, `run_simulation.sh`, `tools/start_boat.sh`, `tools/start_bridge.sh`.
- `data/processed/*.json` — Scenario files (waypoints + conditions). Enriched Nola GPX scenarios live here (e.g. `nola_A18D6D0B_enriched.json`).
```sh
pip install -r simulator/requirements.txt
pip install -r mcp_server/requirements.txt
```

Scripts that use Nola race tracks (e.g. `scripts/extract_nuance.py`) expect the Nola repo as a git submodule at `nola/`. After cloning, run:
```sh
git submodule update --init --recursive
```

If you cloned without `--recurse-submodules`, run the above once to fetch the Nola data. To add the submodule from a Nola repo URL:

```sh
git submodule add <nola-repo-url> nola
git submodule update --init
```

Override the path with the env var `NOLA_DATA_DIR` if your layout differs.
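The `NOLA_DATA_DIR` override amounts to a simple resolution rule. A minimal sketch of how it might look in Python (the function name and fallback are illustrative, not the project's actual code):

```python
import os
from pathlib import Path

def nola_data_dir(project_root: Path) -> Path:
    """Resolve the Nola data directory.

    Illustrative sketch: the NOLA_DATA_DIR env var wins; otherwise
    fall back to the nola/ submodule under the project root.
    """
    override = os.environ.get("NOLA_DATA_DIR")
    return Path(override) if override else project_root / "nola"
```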
This starts the "boat" and generates the sensor data stream.
```sh
./tools/start_boat.sh
```

In a new terminal, start the bridge that allows an AI to "see" the boat data.

```sh
./tools/start_bridge.sh
```

One command (simulator + UI, same scenario):

```sh
./run_simulation.sh [scenario_file] [start_waypoint_index]
```

This starts the simulator in the background and the UI in the foreground; both use the same scenario. On Ctrl+C the simulator is stopped.
Or run them separately: start the simulator with `./tools/start_boat.sh [scenario] [start_index]`, then in another terminal run `./tools/open_ui.sh [scenario]`. Open the Map for the voyage view and live strip, or Chat to talk to the model (LM Studio with the sailing-gpt MCP must be running). See `docs/ui.md` for options and details.
You can query the same sailing assistant from the CLI (no browser). LM Studio must be running with the model and MCP plugin loaded.
```sh
python scripts/chat.py "What's the best heading in 15 kt wind?"
python scripts/chat.py   # interactive; type 'exit' or Ctrl+D to quit
```

Alternatively, `python -m ui chat [message]` takes the same options (`--max-tokens`, `--temperature`). Both use the same config and persona as the web chat.
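Conceptually, each request pairs the persona with the user message and the sampling options above. A minimal sketch of such a payload builder (the exact shape LM Studio's endpoint expects may differ; this is modeled on typical chat APIs, not the project's verified wire format):

```python
def build_chat_payload(persona: str, user_msg: str,
                       max_tokens: int = 512,
                       temperature: float = 0.7) -> dict:
    """Assemble a chat request: system persona first, then the user turn.

    Illustrative sketch only; field names are assumptions, not the
    project's verified request format.
    """
    return {
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_msg},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
```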
- Open LM Studio.
- Enable Allow per-request MCPs in the settings.
- Add a new MCP server pointing to `mcp_server/server.py`.
- Ask the AI: "What is our current heading and wind speed?"
The sailing-assistant persona (concise, cardinal headings, no raw JSON) is defined in `prompts/sailing_assistant_persona.txt`. Override the path with the env var `SAILING_PROMPT_PATH` or the config key `sailing_prompt_path`.
- Voyage UI chat: The UI server loads this file and sends it as `system_prompt` on every request to LM Studio's `/api/v1/chat`. Edits to the file take effect on the next message (no restart). See `ui/server.py` (`_load_sailing_prompt`, `_proxy_chat`).
- Claude Desktop / other MCP clients: The MCP server exposes the same text as an MCP prompt (`sailing_assistant_persona`). Whether it is used as the system message depends on the client; Claude Desktop may use it if configured, or you can paste the persona into the client's custom instructions. The MCP server reads the file at runtime when the prompt is requested.
Endpoints and defaults can be set via environment variables or an optional config file (`config.json` in the project root). Env always overrides config. Keys: `GOFREE_URL`, `LM_STUDIO_URL`, `LM_STUDIO_API_TOKEN`, `LM_STUDIO_MODEL`, `LM_STUDIO_MCP_PLUGIN_ID`, `DEFAULT_SCENARIO`, `SAILING_PROMPT_PATH`, `CONFIG_PATH`. See `config.py` for the full list.
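The precedence rule (env var, then config file, then built-in default) can be sketched as follows. This is an illustration rather than the real `config.py`; the default value shown is taken from the `GOFREE_URL` used elsewhere in this README.

```python
import json
import os
from pathlib import Path
from typing import Optional

# Illustrative fallback; the real defaults live in config.py.
DEFAULTS = {"GOFREE_URL": "ws://127.0.0.1:2053"}

def get_setting(key: str, config_path: str = "config.json") -> Optional[str]:
    """Resolve a setting: environment variable > config.json > default."""
    if key in os.environ:           # env always wins
        return os.environ[key]
    path = Path(config_path)
    if path.exists():               # then the optional config file
        cfg = json.loads(path.read_text())
        if key in cfg:
            return cfg[key]
    return DEFAULTS.get(key)        # finally the built-in default
```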
The Voyage UI chat is wired to LM Studio. To use another LLM backend, run the MCP server (./tools/start_bridge.sh) and point Claude Desktop (or another MCP-capable client) at it; set your model/API in that client. The MCP tools work with any client that supports MCP.
Add the following to your `claude_desktop_config.json` (replace `<path-to-this-project-root>` with the absolute path to this project). Use the project's venv Python so dependencies resolve correctly; set `GOFREE_URL` if the simulator runs on a different host/port.
```json
{
  "mcpServers": {
    "sailing-gpt": {
      "command": "<path-to-this-project-root>/.venv/bin/python",
      "args": ["-m", "mcp_server.server"],
      "env": {
        "PYTHONPATH": "<path-to-this-project-root>",
        "GOFREE_URL": "ws://127.0.0.1:2053"
      }
    }
  }
}
```

The simulator must be running (e.g. `./tools/start_boat.sh`) so the tools return live data.