Two FixedCode commands talk to a language model:
- `fixedcode draft <kind> "<description>" -o <out.yaml>`: turns a prose description into a YAML spec.
- `fixedcode enrich <outputDir>`: fills in extension-point business logic in already-generated code.
Both are opt-in. All other commands are deterministic and offline.
In `.fixedcode.yaml`:

```yaml
llm:
  provider: openrouter                    # openrouter | openai | ollama
  model: google/gemini-2.5-flash
  baseUrl: https://openrouter.ai/api/v1   # optional; defaults per provider
  apiKeyEnv: OPENROUTER_API_KEY
```

Override priority: CLI flags > env vars > `.fixedcode.yaml`.
Env vars:

- `FIXEDCODE_LLM_PROVIDER`
- `FIXEDCODE_LLM_MODEL`
- `FIXEDCODE_LLM_BASE_URL`
- `FIXEDCODE_LLM_API_KEY`
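The precedence is a simple fallback chain. A minimal sketch of what such a resolver could look like (the function and the fallback value are illustrative, not the engine's actual code):

```ts
// Illustrative resolver: CLI flag > env var > .fixedcode.yaml > built-in default.
interface LlmFileConfig {
  provider?: string;
  model?: string;
  baseUrl?: string;
  apiKeyEnv?: string;
}

function resolveModel(cliFlag: string | undefined, fileConfig: LlmFileConfig): string {
  return (
    cliFlag ??
    process.env.FIXEDCODE_LLM_MODEL ??
    fileConfig.model ??
    "google/gemini-2.5-flash" // fallback mirroring the example config above
  );
}
```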
The LLM endpoint sees:
- Your spec YAML (for `draft`: just the prompt; for `enrich`: spec values)
- The contents of files at extension points and their neighbours (`enrich` only)
- The configured API key as Bearer auth

If you don't want this content uploaded, don't run `enrich`.
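As a rough picture of what that looks like on the wire, here is a minimal sketch assuming an OpenAI-compatible chat-completions endpoint (the engine's actual request shape and helper names may differ):

```ts
// Illustrative only: assumes an OpenAI-compatible /chat/completions endpoint,
// which OpenRouter, OpenAI, and Ollama's compatibility layer all expose.
async function callModel(baseUrl: string, apiKey: string, model: string, prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // the configured API key travels as Bearer auth
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```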
The `enrich` command prints a banner at session start listing the LLM endpoint, the file count, and the trust assumption. Always review changes via `git diff` before committing.
To prevent crafted `.fixedcode.yaml` files from exfiltrating your code and credentials to an attacker-controlled endpoint, the engine validates `baseUrl` against an allowlist of known providers:
- `openrouter.ai`
- `api.openai.com`
- `api.anthropic.com`
- `localhost`, `127.0.0.1`, `[::1]` (loopback for local model servers like Ollama)
Plain HTTP is rejected unless the host is loopback. To extend the allowlist for self-hosted endpoints, edit `ALLOWED_LLM_HOSTS` in `engine/src/engine/llm.ts` and rebuild.
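A sketch of what that check amounts to; the authoritative list lives in `ALLOWED_LLM_HOSTS`, and the function name here is illustrative:

```ts
// Illustrative sketch; the real allowlist is ALLOWED_LLM_HOSTS in engine/src/engine/llm.ts.
const ALLOWED_HOSTS = new Set(["openrouter.ai", "api.openai.com", "api.anthropic.com"]);
const LOOPBACK_HOSTS = new Set(["localhost", "127.0.0.1", "[::1]", "::1"]);

function assertAllowedBaseUrl(baseUrl: string): URL {
  const url = new URL(baseUrl); // throws on malformed URLs
  const loopback = LOOPBACK_HOSTS.has(url.hostname);
  if (!ALLOWED_HOSTS.has(url.hostname) && !loopback) {
    throw new Error(`baseUrl host is not in the allowlist: ${url.hostname}`);
  }
  // Plain HTTP is only tolerated for loopback hosts (e.g. a local Ollama server).
  if (url.protocol !== "https:" && !loopback) {
    throw new Error(`baseUrl must use https: ${baseUrl}`);
  }
  return url;
}
```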
- `draft`: parses the LLM response as YAML and validates it against the bundle's schema. On validation failure it retries once; if the retry also fails, the command aborts.
- `enrich`: extracts the code block from the LLM response (strips ``` fences) and writes it to the extension-point file verbatim. Lightweight guards: an empty response is rejected, an oversized response (>512 KB) is rejected, and the manifest's hash check reverts unmodified files. AST-based validation is not yet implemented (tracked in #8). The mitigation is the AI-sandwich workflow: extension points are user-owned, every change is reviewed via `git diff`, and the engine refuses to enrich on a dirty working tree (override with `--force`).
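The `enrich` guards are simple string checks. A minimal sketch under those assumptions (the function name and exact regex are illustrative):

```ts
const MAX_RESPONSE_BYTES = 512 * 1024; // matches the >512 KB guard described above

// Illustrative sketch of the enrich response guards; not the engine's actual implementation.
function extractEnrichedCode(response: string): string {
  const text = response.trim();
  if (text.length === 0) {
    throw new Error("empty LLM response rejected");
  }
  if (Buffer.byteLength(text, "utf8") > MAX_RESPONSE_BYTES) {
    throw new Error("oversized LLM response rejected (>512 KB)");
  }
  // Strip one surrounding Markdown code fence if the model wrapped its answer in one.
  const fenced = text.match(/^```[^\n]*\n([\s\S]*?)\n```$/);
  return fenced ? fenced[1] : text;
}
```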
User-supplied spec values and file contents are inlined into the LLM prompt. A spec value that looks like instructions (e.g. `note: "ignore previous instructions and ..."`) can confuse the model. This is an inherent LLM risk we acknowledge but don't fully mitigate in v0.2.0; tracked in #6.
LLM integration in tests uses a mocked client by default — vitest tests don't make network calls and don't require an API key. Integration tests against a real provider are gated behind an env var.
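For example, vitest's `skipIf` can gate the live suite; the env var name and client shape below are illustrative, not necessarily what the repository uses:

```ts
import { describe, expect, it, vi } from "vitest";

// Illustrative gate: only hit a real provider when explicitly opted in.
// The env var name here is hypothetical; check the repository for the real one.
const runLive = Boolean(process.env.FIXEDCODE_LLM_INTEGRATION);

describe.skipIf(!runLive)("enrich against a real provider", () => {
  it("needs an API key and network access", async () => {
    // ...call the real client here...
  });
});

describe("enrich with a mocked client", () => {
  it("makes no network calls", async () => {
    // Hypothetical client shape with a single complete() method.
    const client = { complete: vi.fn().mockResolvedValue("export const ok = true;") };
    await expect(client.complete("prompt")).resolves.toContain("ok");
  });
});
```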