Local-first log investigation platform: upload compressed log archives, parse and store in Loki, derive metrics for Prometheus, provision Grafana dashboards, index code/docs in PostgreSQL with pgvector, and use an AI agent to produce structured incident reports. Full stack runs via Docker Compose; reports are exportable as Markdown or PDF.
- Docker and Docker Compose
- Python 3.14+ (for local backend development; optional if you only run the stack in Docker)
- LLM API key (OpenAI, OpenRouter, or any OpenAI-compatible endpoint) for report generation
- Copy the environment file and set at least `LLM_API_KEY`:

  ```sh
  cp .env.example .env
  # Edit .env and set LLM_API_KEY (and optionally LLM_BASE_URL, LLM_MODEL)
  ```

- Start the stack (Loki, Prometheus, Grafana, PostgreSQL, backend):

  ```sh
  docker compose up -d
  ```
- Create a session and upload logs (see Quickstart for the full flow):

  ```sh
  curl -X POST http://localhost:8000/sessions \
    -H "Content-Type: application/json" \
    -d '{"name":"my-ticket"}'
  curl -X POST http://localhost:8000/sessions/SESSION_ID/logs/upload \
    -F "file=@/path/to/logs.zip"
  ```
- API docs: http://localhost:8000/docs
- Grafana: http://localhost:3000 (admin/admin)
For detailed steps (knowledge ingest, report generation, export), see Quickstart. Report generation is asynchronous; export returns `409` until the report has content. Poll `GET /sessions/{id}/reports/{report_id}` until `content` is non-empty, then export.
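The polling step can be sketched in Python. This is a sketch, not part of the project: the `fetch` callable and the `wait_for_report` name are placeholders for a wrapper around `GET /sessions/{id}/reports/{report_id}`; the only response field it relies on is `content`, as described above.

```python
import time

def wait_for_report(fetch, timeout=120.0, interval=2.0):
    """Poll until the report payload has non-empty content.

    `fetch` is any zero-argument callable returning the report as a dict,
    e.g. a wrapper around GET /sessions/{id}/reports/{report_id}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        report = fetch()
        if report.get("content"):  # non-empty content -> ready to export
            return report
        time.sleep(interval)
    raise TimeoutError("report had no content within the timeout")
```

Once this returns, the export endpoint should no longer respond with `409`.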
| Variable | Required | Description |
|---|---|---|
| `LLM_API_KEY` | Yes (for reports) | API key for OpenAI-compatible LLM |
| `LLM_BASE_URL` | No | Base URL for LLM API; omit for OpenAI |
| `LLM_MODEL` | No | Model name (default: `gpt-4o-mini`) |
| `EMBEDDING_MODEL` | No | Embedding model (default: `text-embedding-3-small`) |
| `LOKI_URL` | No* | Loki URL (default: `http://localhost:3100`) |
| `PROMETHEUS_URL` | No* | Prometheus URL (default: `http://localhost:9090`) |
| `SESSION_RETENTION_ENABLED` | No | Enable automatic retention cleanup (default: `true`) |
| `SESSION_RETENTION_MAX_COUNT` | No | Keep at most this many newest unpinned sessions; 0 or less disables count cleanup (default: 20) |
| `SESSION_RETENTION_MAX_AGE_DAYS` | No | Delete unpinned sessions older than this many days; 0 or less disables age cleanup (default: 30) |
| `PROMETHEUS_RETENTION_TIME` | No | Global Prometheus retention window used by Docker Compose (default: `30d`) |
| `KNOWLEDGE_CODE_SOURCES` | No | Comma-separated paths for code ingest |
| `KNOWLEDGE_DOC_SOURCES` | No | Comma-separated paths for documentation ingest |
| `KNOWLEDGE_MOUNT_ROOT` | No | Docker: common parent path mounted at `/knowledge` for code/docs sources |
\* Defaults are correct when running the backend on the host against Docker Compose services; inside Docker, the compose file sets the URLs to service names.
Copy .env.example to .env and set the values you need.
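A minimal `.env` for local use might look like the fragment below. The values are placeholders, not real credentials; per the table above, only `LLM_API_KEY` is required for report generation, and `LLM_BASE_URL` is only needed for non-OpenAI endpoints.

```ini
LLM_API_KEY=sk-your-key-here
# LLM_BASE_URL=https://openrouter.ai/api/v1  # only for OpenAI-compatible endpoints
LLM_MODEL=gpt-4o-mini
```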
Useful when developing; infrastructure can still run in Docker:
```sh
docker compose up -d loki prometheus grafana postgres
cd backend && uv run fastapi dev app/main.py
```

Set `LOKI_URL` and `PROMETHEUS_URL` in `.env` to `http://localhost:3100` and `http://localhost:9090`, respectively.
Pinned sessions are excluded from automatic cleanup. Unpinned sessions are cleaned up on backend startup and after successful or partial uploads using the count and age limits above.
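As a sketch of how the count and age rules above combine (this is illustrative, not the backend's actual code; the session fields `id`, `pinned`, and `created_at` are assumed names):

```python
from datetime import datetime, timedelta, timezone

def sessions_to_delete(sessions, max_count=20, max_age_days=30, now=None):
    """Return the ids of unpinned sessions that the retention rules remove.

    Pinned sessions are always exempt. Unpinned sessions beyond the newest
    `max_count`, or older than `max_age_days`, are selected for deletion.
    A limit of 0 or less disables that rule, matching the settings above.
    """
    now = now or datetime.now(timezone.utc)
    unpinned = sorted((s for s in sessions if not s["pinned"]),
                      key=lambda s: s["created_at"], reverse=True)
    doomed = set()
    if max_count > 0:
        doomed |= {s["id"] for s in unpinned[max_count:]}  # count rule
    if max_age_days > 0:
        cutoff = now - timedelta(days=max_age_days)
        doomed |= {s["id"] for s in unpinned
                   if s["created_at"] < cutoff}            # age rule
    return doomed
```

Note that the two rules are independent: a session is removed if either limit catches it, and pinning overrides both.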
From the repository root:
```sh
cd backend && pytest
```

Contract tests validate the API and agent tool schemas (see `specs/001-log-investigation-mvp/contracts/`).
- Size: Max 500 MB uncompressed, 100 MB compressed per upload.
- Log patterns: `.log`, `.csv`, `.json`, and optionally `.log.*`, `stdout`, `stderr`. Other files are skipped and counted in the upload result.
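The filename filter implied by the patterns above can be sketched like this. The use of `fnmatch` globs and the split into a base and an optional pattern set are assumptions for illustration; the backend's exact matching rules may differ.

```python
import fnmatch
from pathlib import PurePath

# Patterns from the limits above; the "optional" set mirrors the
# .log.* / stdout / stderr patterns mentioned in the text.
BASE_PATTERNS = ["*.log", "*.csv", "*.json"]
OPTIONAL_PATTERNS = ["*.log.*", "stdout", "stderr"]

def is_ingestible(name: str, include_optional: bool = True) -> bool:
    """Return True if a filename matches the ingestible log patterns."""
    patterns = BASE_PATTERNS + (OPTIONAL_PATTERNS if include_optional else [])
    base = PurePath(name).name  # match on the basename only
    return any(fnmatch.fnmatch(base, pat) for pat in patterns)
```

Files for which this returns `False` are skipped rather than rejected, and the skip count is reported in the upload result.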
- Quickstart — full flow: compose, session, upload, Grafana, knowledge ingest, report generation, export
- Implementation plan — tech stack and project structure
- API contracts — API and agent tool specs