```bash
# Clone
git clone https://github.com/2ndSetAI/good-egg.git
cd good-egg

# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync
```

```text
good-egg/
├── src/good_egg/
│   ├── action.py           # GitHub Action entry point
│   ├── cache.py            # SQLite-backed response cache
│   ├── cli.py              # Click CLI (good-egg score, cache-stats, cache-clear)
│   ├── config.py           # YAML + env var configuration
│   ├── exceptions.py       # Custom exception hierarchy
│   ├── formatter.py        # Markdown, CLI, JSON, check-run formatters
│   ├── github_client.py    # Async GitHub GraphQL/REST client with retry
│   ├── graph_builder.py    # Bipartite trust graph construction
│   ├── mcp_server.py       # MCP server for AI assistant integration
│   ├── models.py           # Pydantic data models
│   └── scorer.py           # Graph-based trust scoring engine
├── tests/                  # pytest test suite
├── scripts/
│   └── validate_scoring.py # Validation against real repos
├── docs/                   # Documentation (library, action, MCP, config)
├── examples/               # Example workflows and config files
└── CHANGELOG.md            # Release history
```
```bash
# Run tests
uv run pytest --cov=good_egg -v

# Lint
uv run ruff check src/ tests/ scripts/

# Type check
uv run mypy src/good_egg/

# Format (auto-fix)
uv run ruff check --fix src/ tests/ scripts/

# Verify packaging
uv build
```

- Ruff enforces style (E, F, I, N, W, UP, B, A, SIM rules)
- Line length: 99 characters
- Type annotations required on all function signatures (mypy strict)
- Use `from __future__ import annotations` in every module
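A minimal module sketch consistent with the rules above (future-annotations import first, fully annotated signature for mypy strict, lines under 99 characters); the function name and logic are illustrative, not taken from the codebase:

```python
from __future__ import annotations


def weighted_score(raw: float, weight: float = 1.0) -> float:
    """Scale a raw score by a weight and clamp the result to [0.0, 1.0].

    Illustrative helper only; it is not part of good-egg's API.
    """
    return max(0.0, min(1.0, raw * weight))
```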
The validation script scores real PR authors from popular repos to check scoring methodology:
```bash
GITHUB_TOKEN=$(gh auth token) uv run python scripts/validate_scoring.py --sample-size 5
```

This requires an authenticated `gh` CLI. Results are written to the `validation/` directory.
- Create a feature branch from `main`
- Make your changes with tests
- Ensure `uv run pytest --cov=good_egg -v` passes with >= 90% coverage
- Ensure `uv run ruff check src/ tests/ scripts/` is clean
- Open a PR with a clear description
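The branching steps above can be walked through in a scratch repository; the branch and file names here are illustrative, and the real gate is the pytest and ruff commands quoted in the checklist:

```shell
set -e
repo=$(mktemp -d)                         # scratch repo standing in for a clone
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"   # placeholder identity for the demo
git config user.name "Dev"
git commit -q --allow-empty -m "init"     # stand-in for existing main history
git checkout -q -b feat/my-change         # create a feature branch from main
echo "fix" > change.txt                   # stand-in for changes with tests
git add change.txt
git commit -q -m "Make changes with tests"
git branch --show-current                 # prints: feat/my-change
```

After the tests and lint pass locally, push the branch and open the PR from it.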