A domain-neutral engine for multidimensional evaluation under explicit policy assumptions.
The engine:
- applies a configurable policy
- evaluates candidate configurations
- computes scores using rule-based mappings
- evaluates constraint-based admissibility
- derives interpretation indicators from score thresholds
- supports structured comparison across alternative designs
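The threshold-based interpretation step above can be sketched in a few lines. The indicator labels and cutoffs here are purely illustrative assumptions, not values defined by the engine:

```python
# Hypothetical sketch: derive an interpretation indicator from a
# numeric score using ordered thresholds. Labels and cutoffs are
# illustrative; a real policy supplies its own.

def interpret(score: float, thresholds: list[tuple[float, str]]) -> str:
    """Return the label of the first threshold the score meets.

    `thresholds` must be sorted by cutoff, descending.
    """
    for cutoff, label in thresholds:
        if score >= cutoff:
            return label
    return "below-all-thresholds"

POLICY_THRESHOLDS = [(0.8, "strong"), (0.5, "acceptable"), (0.0, "weak")]

print(interpret(0.62, POLICY_THRESHOLDS))  # acceptable
```

Because the mapping is pure data plus a pure function, the same scores can be reinterpreted under a different threshold policy without re-running the evaluation.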
The system is designed to make assumptions explicit and results inspectable.
This project does not advocate for a specific solution.
It provides a structured way to examine:
- how design choices affect outcomes
- where tradeoffs become significant
- how governance assumptions shape results
This project provides a reusable framework for multidimensional evaluation under explicit assumptions and constraints. It:
- supports policy-driven evaluation across multiple domains
- represents inputs as typed factors (binary, numeric, categorical)
- applies constraint rules and score rules defined in policy
- separates input structure, policy logic, and evaluation
The goal is to provide a stable core that can support multiple exploratory systems built on a shared evaluation model.
The contribution is the engine for structured multidimensional evaluation, not the specific values used in any given scenario.
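To make the separation of typed factors, constraint rules, and score rules concrete, here is a minimal sketch in plain Python. The factor names, rule bodies, and data shapes are hypothetical; the engine's actual policy format may differ:

```python
# Hypothetical policy sketch: typed factor specs plus constraint and
# score rules as plain data and functions. Illustrates the separation
# of input structure, policy logic, and evaluation; not the engine's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class FactorSpec:
    name: str
    kind: str  # "binary" | "numeric" | "categorical"

FACTORS = [
    FactorSpec("certified", "binary"),
    FactorSpec("latency_ms", "numeric"),
    FactorSpec("tier", "categorical"),
]

# Constraint rule: admissible only if certified and fast enough.
def admissible(candidate: dict) -> bool:
    return candidate["certified"] and candidate["latency_ms"] <= 200

# Score rule: a rule-based mapping from factor values to a score.
def score(candidate: dict) -> float:
    tier_weight = {"gold": 1.0, "silver": 0.7, "bronze": 0.4}
    return tier_weight[candidate["tier"]] * (1 - candidate["latency_ms"] / 1000)

c = {"certified": True, "latency_ms": 120, "tier": "silver"}
print(admissible(c), round(score(c), 3))
```

Keeping factor specs, constraints, and scoring as separate pieces is what lets a policy be swapped without touching the candidates or the evaluator.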
- Factors and their structure are explicitly defined
- Scoring and constraints are policy-driven
- Assumptions are explicit and inspectable
- Results are comparative and scenario-dependent
- The core logic is domain-neutral
This project does not determine outcomes or recommend decisions. It provides a way to examine how different assumptions and constraints shape results.
Working files are found in these areas:
- docs/ - documentation and examples
- src/ - implementation
- Loads policy definitions (factor specs, constraint rules, score rules)
- Evaluates candidates using typed factor values
- Computes score profiles, admissibility, and interpretation indicators
- Supports reusable integration into domain-specific explorer systems
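The flow above can be sketched end to end. The function name echoes the package's `evaluate_candidate`, but the signature, result shape, and policy contents below are illustrative assumptions, not the engine's actual API:

```python
# Hypothetical end-to-end sketch: apply a policy to a candidate and
# return a score profile, admissibility, and an interpretation
# indicator. The signature and result shape are assumptions.

def evaluate_candidate(candidate: dict, policy: dict) -> dict:
    # Score profile: one score per dimension, from the policy's rules.
    profile = {dim: rule(candidate) for dim, rule in policy["score_rules"].items()}
    # Admissibility: every constraint rule must hold.
    ok = all(rule(candidate) for rule in policy["constraint_rules"])
    # Interpretation indicator from the mean score and a threshold.
    mean = sum(profile.values()) / len(profile)
    indicator = "pass" if ok and mean >= policy["threshold"] else "review"
    return {"profile": profile, "admissible": ok, "indicator": indicator}

policy = {
    "score_rules": {
        "cost": lambda c: 1 - c["cost"],      # lower cost scores higher
        "coverage": lambda c: c["coverage"],  # direct numeric factor
    },
    "constraint_rules": [lambda c: c["coverage"] >= 0.5],
    "threshold": 0.6,
}

result = evaluate_candidate({"cost": 0.3, "coverage": 0.9}, policy)
print(result["indicator"])  # pass
```

A domain-specific explorer system would supply its own `policy` dict and iterate `evaluate_candidate` over alternative designs to produce the structured comparisons described above.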
Command reference:
After you get a copy of this repo in your own GitHub account,
open a terminal on your machine in your Repos folder:
# Replace username with YOUR GitHub username.
git clone https://github.com/username/multidimensional-evaluation-engine
cd multidimensional-evaluation-engine
code .

# Set Up the Environment
uv self update
uv python pin 3.14
uv sync --extra dev --extra docs --upgrade
uvx pre-commit install
# Local format + lint
uv run ruff format --check .
uv run ruff check .
# Pre-commit (enforce repo rules)
git add -A
uvx pre-commit run --all-files
# repeat if changes were made
git add -A
uvx pre-commit run --all-files
# Static + security + dependency checks
uv run validate-pyproject pyproject.toml
uv run deptry .
uv run bandit -c pyproject.toml -r src
uv run pyright
# Tests (after static checks pass)
uv run pytest --cov=src --cov-report=term-missing
# Docs build (after everything passes)
uv run zensical build
# Commit and push
git add -A
git commit -m "update"
git push -u origin main
# Reinstall + sanity checks (post-push validation)
uv sync --reinstall
uv run python -c "import multidimensional_evaluation_engine; print(multidimensional_evaluation_engine.__version__)"
uv run python -c "from multidimensional_evaluation_engine.evaluation.evaluator import evaluate_candidate; print(evaluate_candidate)"
# Build artifacts (verify release)
uv build