cpgrant/macshell

MacShell – AI-Powered Shell for macOS

MacShell is an experimental, local AI shell assistant for macOS. Type natural language like “list large files here”; MacShell asks your local LLM (via LM Studio or GPT4All) for a single shell command, shows it, and (optionally) runs it with your confirmation.

⚠️ MacShell defaults to macOS/POSIX commands and includes basic safety checks (blocks obviously dangerous commands; asks for extra confirmation for risky ones).

✨ Features

• 💬 Interactive REPL (Read–Eval–Print Loop) with one-key execution (Enter = run)
• 🧠 Local LLM backends:
  • LM Studio (OpenAI-compatible local server)
  • GPT4All (optional; off by default)
• 🧯 Safety guardrails (denylist + high-risk double confirmation)
• 🧹 Output sanitization (strips backticks, keeps the first command line)
• 🧰 macOS-biased command generation (e.g., prefers ifconfig over ip addr)
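The sanitization step above can be sketched as a small pure function. This is an illustrative sketch only; the function name and exact logic are assumptions, not MacShell's actual implementation:

```python
def sanitize_reply(reply: str) -> str:
    """Strip code fences/backticks and keep only the first command line.

    Illustrative sketch of the sanitization behavior described above;
    the real implementation in MacShell may differ.
    """
    lines = []
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith("```"):  # drop code-fence markers entirely
            continue
        lines.append(line.replace("`", ""))  # strip inline backticks
    # Keep the first non-empty line, discarding any trailing prose
    for line in lines:
        if line:
            return line
    return ""

print(sanitize_reply("```sh\nls -l\n```\nThis lists files."))  # -> ls -l
```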

📦 Project structure

```
.
├── archive/                  # old copies, experiments (ignored)
├── macshell/
│   ├── __init__.py
│   ├── cli.py                # CLI entry (run_cli)
│   ├── config.py             # engine/model settings + instruction
│   ├── model_runner.py       # talks to LM Studio/GPT4All
│   └── repl.py               # interactive shell loop
├── main.py                   # python -m entry
├── personal_tests/
├── scripts/
│   └── start_server.sh       # starts LM Studio server (port 1234)
├── run_macshell.sh           # convenience launcher (starts server + REPL)
├── requirements-macshell.txt
├── setup.py
└── README.md
```

✅ Requirements

• macOS (tested on Apple Silicon)
• Python 3.11+
• (Recommended) LM Studio installed locally

🔧 Installation

```shell
# clone & enter
git clone https://github.com/YOUR_USERNAME/macshell.git
cd macshell

# create & activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# install dependencies
pip install -r requirements-macshell.txt

# optional: install as a console script
pip install -e .
```

After pip install -e ., you’ll have a macshell command on your PATH.

⚙️ Configure

Open macshell/config.py:

```python
ENGINE = "lmstudio"  # or "gpt4all"

# LM Studio
MODEL_NAME = "google/gemma-3-27b"  # LM Studio model id
API_URL = "http://localhost:1234/v1/chat/completions"

# GPT4All: only used if ENGINE = "gpt4all"
GPT4ALL_MODEL_NAME = "mistralai_Mistral-Small-3.1-24B-Instruct-2503-GGUF"
GPT4ALL_API_URL = "http://localhost:4891/v1/chat/completions"

INSTRUCTION = (
    "You are MacShell, running on macOS. Always suggest macOS/POSIX-compatible shell commands. "
    "Never suggest Linux-only commands like ip addr, apt-get, or lsb_release. "
    "Prefer ifconfig or networksetup for networking tasks. "
    "Output exactly ONE command, with no prose, no backticks, and no explanations."
)
```

You can change models from LM Studio’s UI or by editing MODEL_NAME.
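For reference, an OpenAI-compatible chat request to the configured endpoint can be assembled roughly like this. This is a minimal sketch under stated assumptions: build_payload is a hypothetical helper (not part of MacShell's API), and the temperature value is an illustrative choice:

```python
import json

# Values mirroring the config.py example above
API_URL = "http://localhost:1234/v1/chat/completions"
MODEL_NAME = "google/gemma-3-27b"
INSTRUCTION = "You are MacShell, running on macOS. Output exactly ONE command."

def build_payload(user_prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions payload (illustrative)."""
    return {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system", "content": INSTRUCTION},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,  # low temperature keeps suggestions consistent
    }

payload = build_payload("list files")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to API_URL; the suggested command comes back in the first choice's message content, per the OpenAI-compatible response shape.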

🚀 Start the model server (LM Studio)

Option A (recommended): use the included script

```shell
./scripts/start_server.sh   # internally runs: lms server start --port 1234
```

Option B: start from LM Studio UI Settings → Server → Start (ensure it listens on http://localhost:1234).

🖥️ Usage

1. Interactive REPL (default workflow)

Using the convenience wrapper:

```shell
./run_macshell.sh   # starts the LM Studio server, then launches the REPL
```

Or, if you installed as a console script:

```shell
macshell --repl
```

Example session:

```
MacShell Interactive Shell
Model: 'google/gemma-3-27b'
Engine: lmstudio
Server: http://localhost:1234/v1/chat/completions
Type 'exit' or Ctrl-D to quit.

(.venv) user@host macshell % list files
MacShell Reply: ls -l

Command suggested: ls -l

Run this command? ([Enter]=yes | n=no | e=edit | run <cmd>):
```

Keys at the prompt:

• Enter → run the suggested command
• n / no → skip
• e / edit → type a different command to run
• run / exec → run a custom command directly

MacShell prints stdout, stderr, and the exit code.
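Printing stdout, stderr, and the exit code for a confirmed command can be done with Python's subprocess module. A sketch of the idea, not MacShell's exact code (run_command is a hypothetical name):

```python
import subprocess

def run_command(cmd: str) -> int:
    """Run a shell command, print its output streams, return the exit code."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if result.stdout:
        print(result.stdout, end="")
    if result.stderr:
        print(result.stderr, end="")
    print(f"[exit code: {result.returncode}]")
    return result.returncode

run_command("echo hello")  # prints "hello" and the exit code
```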

🧯 If a command looks risky, MacShell asks for yes explicitly or blocks it.

2. One-off prompt (non-REPL)

If you installed the console script:

```shell
macshell "list files in current directory"
```

Or via Python:

```shell
python main.py "list files in current directory"
```

This prints the suggested command (no auto-execute in one-off mode).

🧯 Safety behavior

• Blocks obviously dangerous commands (e.g., rm -rf /, fork bombs).
• High-risk patterns (pipes to sudo, curl | sh, etc.) require typing yes.
• You still decide what actually runs (Enter/yes/edit).

You can review/extend these checks in macshell/repl.py (DANGEROUS_* lists).
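The denylist/high-risk split can be sketched with a couple of pattern lists. The patterns below are illustrative examples only, not the project's actual DANGEROUS_* lists:

```python
import re

# Illustrative patterns only; see macshell/repl.py for the real lists.
BLOCKED_PATTERNS = [
    r"rm\s+-rf\s+/\s*$",            # wipe the filesystem root
    r":\(\)\s*\{\s*:\|:&\s*\};:",   # classic shell fork bomb
]
HIGH_RISK_PATTERNS = [
    r"\|\s*sudo\b",                 # piping into sudo
    r"curl\b.*\|\s*(sh|bash)\b",    # curl | sh install one-liners
]

def classify(cmd: str) -> str:
    """Return 'blocked', 'high-risk', or 'ok' for a suggested command."""
    if any(re.search(p, cmd) for p in BLOCKED_PATTERNS):
        return "blocked"
    if any(re.search(p, cmd) for p in HIGH_RISK_PATTERNS):
        return "high-risk"
    return "ok"

print(classify("ls -l"))  # -> ok
```

'blocked' commands would never run; 'high-risk' ones would map to the explicit-yes confirmation described above.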

🧠 Tips & examples

• Network info (macOS):
  • Show Wi-Fi IP: ipconfig getifaddr en0
  • List hardware ports: networksetup -listallhardwareports
• Disk space:
  • Human-readable: df -h
• Big files:
  • Top 20 by size: du -sh * | sort -hr | head -n 20
• Find text in project:
  • grep -R "TODO" .

If the model suggests a Linux-only command (rare with the current instruction), override it quickly with e (edit) or run (custom command).

🧪 GPT4All (optional)

If you prefer GPT4All:

1. Switch the engine in macshell/config.py:

```python
ENGINE = "gpt4all"
```

2. Ensure GPT4ALL_MODEL_NAME and GPT4ALL_API_URL are correct.
3. Run the REPL or a one-off prompt as above.

🛠️ Development

```shell
# install in editable mode
pip install -e .

# run the REPL
macshell --repl

# run a single prompt without installing
python main.py "list files"
```

Lint/tests (if you add them later):

```shell
ruff check .
pytest -q
```

🐞 Troubleshooting

• Model replies with Linux ip addr: ensure INSTRUCTION in config.py matches the macOS guidance above.
• LM Studio not responding / connection errors: make sure the server is running on http://localhost:1234. From LM Studio: Settings → Server → Start, and check the port.
• Websocket-related traceback: use the current code path (the LM Studio server HTTP endpoint). If you previously tried LM Studio's Python SDK websockets directly, prefer the HTTP endpoint configured in API_URL.
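To rule out connection problems quickly, you can check whether anything is accepting TCP connections on the configured port. A small sketch (assumes the default host and port from the config.py example; not a MacShell built-in):

```python
import socket

def server_listening(host: str = "localhost", port: int = 1234) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

if not server_listening():
    print("LM Studio server not reachable; try ./scripts/start_server.sh")
```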

📄 License

MIT — see LICENSE (add one if missing).

🙏 Acknowledgements

• LM Studio and the open-source LLM community.

This project is experimental and intended for learning, prototyping, and personal workflows on macOS. Use at your own risk.
