# EvalCI v1.0.0
LLM quality gates for every PR — zero infra, 2-minute setup, works with any LLM provider.
## Features
- Run `@eval_case` suites automatically on every pull request
- Block merge if quality drops below a configured threshold
- Post formatted results table as a PR comment (score, cost, latency per case)
- Works with 30+ LLM providers via SynapseKit
- Zero infrastructure — runs entirely in GitHub Actions
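
To make the feature list above concrete, here is a hypothetical sketch of how an eval-case file and the threshold gate might fit together. The `@eval_case` decorator name comes from this README; everything else (the registry, the signatures, mean-score aggregation) is an assumption for illustration, not EvalCI's real API.

```python
# Hypothetical sketch -- not the real EvalCI API.
from typing import Callable, Dict

REGISTRY: Dict[str, Callable[[], float]] = {}

def eval_case(fn: Callable[[], float]) -> Callable[[], float]:
    """Register a function that returns a quality score in [0.0, 1.0]."""
    REGISTRY[fn.__name__] = fn
    return fn

@eval_case
def summary_is_concise() -> float:
    # A real case would call an LLM provider (e.g. via SynapseKit) and grade
    # the response; a canned string keeps this sketch self-contained.
    response = "Paris is the capital of France."
    return 1.0 if len(response.split()) <= 10 else 0.0

def gate(scores: Dict[str, float], threshold: float) -> bool:
    """Pass when the mean score meets the threshold; CI fails otherwise."""
    mean = sum(scores.values()) / len(scores) if scores else 0.0
    return mean >= threshold

scores = {name: fn() for name, fn in REGISTRY.items()}
print(gate(scores, 0.80))
```

In this sketch, the action would run every registered case, average the scores, and block the merge when the mean falls below the `threshold` input.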
## Usage

```yaml
- uses: SynapseKit/evalci@v1
  with:
    path: tests/evals
    threshold: "0.80"
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

## What's included
- Apache 2.0 license
- Issue templates (bug report, feature request)
- Discussion template
- PR template
- CONTRIBUTING.md, SECURITY.md, CHANGELOG.md
See README for full documentation.