
genworker


The local-first runtime for digital workers that hold roles, follow rules, and evolve under governance.

English | 中文

Quick Start | API | Configuration | Deployment | Architecture

genworker is a filesystem-first runtime for digital employee and digital worker scenarios.

It does not wrap a generic agent with a prompt and keep it running indefinitely. Instead, it treats the role as the primary object in the system:

  • Who the role is
  • What the role is responsible for
  • What the role can and cannot do
  • Who the role collaborates with
  • How knowledge learned by the role enters the system

By default, genworker retains its core capabilities (multiple workers, skills, tools, MCP, sessions, and autonomous runtime execution) while minimizing infrastructure requirements:

  • Redis is not required by default
  • MySQL is not required by default
  • OpenViking is not required by default
  • IM channels are not enabled by default
  • The default runtime only depends on local workspace/ and configs/

Core Characteristics

  • Role-first, not agent-first: The primary entity is a role or worker, not an agent instance.
  • Organization-aware: Multiple AI workers are not just multi-agent messaging; they have responsibility boundaries, collaboration relationships, and ownership-based routing.
  • Governable by design: Permissions, auditability, trust levels, and human-in-the-loop controls are runtime capabilities, not prompt-only conventions.
  • Learning with approval: Learning follows a proposal, review, and activation flow instead of being automatically persisted and applied.
  • Goal-driven autonomy: Proactive behavior is driven by structured goals and state deviation, not only by cron schedules.
  • One runtime, many triggers: Conversations, tasks, events, and inspections share the same worker execution pipeline.

What You Can Do With It

  • Deploy multiple digital workers for an organization, with each worker occupying a role and owning clear responsibilities and boundaries.
  • Let the same role handle conversations, API tasks, event responses, and autonomous inspections.
  • Isolate role experience and customer data across layers to avoid cross-tenant or cross-customer data leakage.
  • Let AI learn new rules while keeping activation inside a governed proposal, review, and activation lifecycle.
  • Trigger runtime actions based on goal deviation instead of only executing scripts when timers fire.
  • Bring the full local flow online with HTTP/SSE, workspace files, configuration templates, and debug endpoints.

How It Is Different

| Dimension | Common personal assistants / generic agents | genworker |
| --- | --- | --- |
| Primary entity | One user's agent or workspace | A role or worker inside an organization |
| Role definition | Prompt plus tool configuration | A system-registered role object |
| Memory boundary | A global memory pool around "me" | Layered isolation between role experience and tenant data |
| Learning model | Automatically persisted and often automatically activated | Proposal, review, activation, and decay lifecycle |
| Multi-role collaboration | Multi-agent messaging or routing | Responsibility boundaries, collaboration relationships, and ownership routing |
| Proactivity | Cron or scheduled triggers | Goal-driven and state-deviation-driven triggers |
| Work modes | Conversation, task, event, and inspection flows often implemented separately | Conversations, tasks, events, and inspections share one execution pipeline |

Best Fit

  • Organizations that need multiple digital workers, each occupying a role with clear responsibility boundaries.
  • A single role that should handle conversations, tasks, event responses, and autonomous inspections through one runtime.
  • Systems that need role experience and customer data isolated across tenants and customers.
  • Business workflows that require reviewable rules, auditable actions, and traceable critical operations.
  • Teams that want to validate the full runtime locally before introducing more complex infrastructure.

Not For

  • A personal AI companion or personal inbox aggregator.
  • Ungoverned self-learning agents where learned behavior becomes active automatically.
  • One-off creative tasks that do not need long-lived roles or organizational boundaries.
  • Lightweight prototypes that stitch together separate systems for conversations, tasks, events, and inspections.

If your goal is a personal AI companion, projects such as Hermes or OpenClaw may be a better fit. If your goal is a governable, traceable digital worker that serves an organization and occupies a role, genworker is the better fit.

Documentation

What You Get

  • HTTP chat entry point: POST /api/v1/chat/stream
  • Worker task stream entry point: POST /api/v1/worker/task/stream
  • Health check: GET /health
  • Readiness check: GET /readiness
  • Runtime diagnostics: GET /api/v1/debug/runtime
  • Local worker, skill, and persona loading
  • Filesystem-based sessions and workspace runtime mode
  • Optional Redis, OpenViking, and IM-channel enhancements

Default Operating Model

The default operating model is intentionally direct:

  • Use configs/ for layered configuration.
  • Use workspace/ for tenants, workers, skills, and personas.
  • Start the runtime with python start.py.
  • Inspect the current state through /health, /readiness, and /api/v1/debug/runtime.

This means you can validate the system on a regular development machine first, then decide whether to introduce a reverse proxy, external storage, or a more complex deployment topology.

Three-Minute Quick Start

1. Install

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

2. Prepare Config

Minimal runnable configuration:

cp configs/config.example.env configs/config_local.env

If you prefer to maintain only configs/config_local.env, edit it directly. The runtime reads layered configuration under configs/ first. The root .env.example mainly serves containers, CI, or external launch wrappers; it is not the primary configuration entry point for start.py.

3. Start

python start.py

The default mode is lightweight and local:

  • RUNTIME_PROFILE=local
  • REDIS_ENABLED=false
  • MYSQL_ENABLED=false
  • OPENVIKING_ENABLED=false
  • IM_CHANNEL_ENABLED=false

4. Verify

curl -s http://127.0.0.1:8000/health
curl -s http://127.0.0.1:8000/readiness
curl -s http://127.0.0.1:8000/api/v1/debug/runtime

If /readiness returns success and /api/v1/debug/runtime shows the default worker and current profile, the main runtime path is up.
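The same verification can be scripted. The sketch below is not part of genworker; it simply treats an HTTP 200 response as "ready", which is an assumption about how the endpoints report success:

```python
import urllib.error
import urllib.request

def endpoint_ok(url, timeout=2.0):
    """Return True only if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def runtime_is_up(base="http://127.0.0.1:8000"):
    """/health says the process is alive; /readiness says it can serve traffic."""
    return all(endpoint_ok(base + path) for path in ("/health", "/readiness"))
```

A check like this is handy in CI or a container healthcheck, where a non-zero exit on `not runtime_is_up()` can gate the rest of a pipeline.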

First Requests

Chat stream example:

curl -s -N -X POST "http://127.0.0.1:8000/api/v1/chat/stream" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello, help me summarize what I should prioritize today",
    "thread_id": "chat-001",
    "tenant_id": "demo",
    "worker_id": "analyst-01"
  }'
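The chat stream can also be consumed programmatically. The sketch below assumes the endpoint emits standard SSE `data:` lines; the exact event framing and payload shape are assumptions, not documented guarantees:

```python
import json
import urllib.request

def parse_sse_data(line):
    """Extract the payload of an SSE 'data:' line; return None for other lines."""
    line = line.strip()
    if line.startswith("data:"):
        return line[len("data:"):].strip()
    return None

def stream_chat(message, thread_id, tenant_id, worker_id,
                base="http://127.0.0.1:8000"):
    """POST to the chat stream endpoint and yield each SSE data payload."""
    body = json.dumps({
        "message": message,
        "thread_id": thread_id,
        "tenant_id": tenant_id,
        "worker_id": worker_id,
    }).encode("utf-8")
    req = urllib.request.Request(
        base + "/api/v1/chat/stream",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            data = parse_sse_data(raw.decode("utf-8"))
            if data is not None:
                yield data
```

Iterating the response line by line keeps memory flat no matter how long the stream runs.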

Task stream example:

curl -s -N -X POST "http://127.0.0.1:8000/api/v1/worker/task/stream" \
  -H "Content-Type: application/json" \
  -d '{
    "task": "Check my inbox and organize my todos",
    "tenant_id": "demo",
    "worker_id": "analyst-01"
  }'

See docs/API.md for more API details.

Configuration

See docs/CONFIGURATION.md for configuration details.

Key rules:

  • Configuration files are read from the project-root configs/ directory and do not depend on the current shell directory.
  • If LOG_DIR is a relative path, it is resolved against the project root.
  • The default workspace root is fixed to <project>/workspace.
  • start.py switches back to the project root before startup to avoid path drift when launched from another directory.
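The path rules above boil down to one resolution step. This is a sketch of the idea, not genworker's actual code; `PROJECT_ROOT` here is a stand-in, relying on the fact that start.py switches to the project root before startup:

```python
from pathlib import Path

# Stand-in for the repository root; because start.py chdirs to the real
# project root before startup, Path.cwd() matches it at runtime.
PROJECT_ROOT = Path.cwd()

def resolve_log_dir(log_dir):
    """Absolute LOG_DIR values pass through; relative ones resolve against the project root."""
    path = Path(log_dir)
    return path if path.is_absolute() else PROJECT_ROOT / path
```

Anchoring relative paths to the project root is what keeps logs and workspace files from drifting when the server is launched from another directory.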

Reference templates:

  • .env.example
  • configs/config.example.env
  • configs/profiles/local.env
  • configs/profiles/local_memory.env
  • configs/profiles/advanced.env
  • configs/profiles/enterprise.env

Core Runtime Model

It is useful to think of genworker as four layers:

  1. Entry Layer: HTTP / SSE / IM / Event / Scheduler
  2. Runtime Layer: WorkerRouter, Session, Task, Memory, ToolPipeline
  3. Workspace Layer: tenant, worker, skill, and persona definitions in workspace/
  4. Infra Layer: Redis, OpenViking, MySQL, external platforms, and proxy layers

The default local mode only requires the first three layers.

Runtime Profiles

| Profile | Purpose | Redis | MySQL | OpenViking | IM |
| --- | --- | --- | --- | --- | --- |
| local | Minimal local development and debugging | off | off | off | off |
| local_memory | Local filesystem plus semantic memory experiments | off | off | on | off |
| advanced | Enhanced runtime | on | off | off | off |
| enterprise | Full enterprise template | on | on | off | on |

These profiles are templates; they do not lock in your deployment model. Process environment variables still have the highest priority.
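The precedence order (profile defaults, then local overrides, then process environment) can be sketched in a few lines. This is an illustration of the layering idea, not genworker's actual loader:

```python
import os

def parse_env(text):
    """Parse KEY=VALUE lines, skipping blanks and comments."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        out[key.strip()] = value.strip()
    return out

def effective_config(profile_text, local_text, env=os.environ):
    """Profile defaults < local overrides < process environment."""
    cfg = parse_env(profile_text)
    cfg.update(parse_env(local_text))          # config_local.env overrides the profile
    for key in list(cfg):
        if key in env:
            cfg[key] = env[key]                # process env vars win over both files
    return cfg
```

For example, a profile setting `RUNTIME_PROFILE=local` is overridden by config_local.env, which in turn is overridden by an exported environment variable of the same name.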

Repository Layout

.
├── configs/                  # Layered configuration and profile templates
├── docs/                     # Architecture and configuration documentation
├── frontend/                 # Frontend static assets
├── src/                      # Runtime implementation
├── tests/                    # Unit and integration tests
├── workspace/                # Default runtime workspace
├── workspace.example/        # Example workspace template
└── start.py                  # Local startup entry point

Development Notes

  • Start with python start.py.
  • Set API_BEARER_TOKEN or API_KEY if you need to protect API endpoints.
  • /health only checks whether the process is alive; /readiness checks whether the default main runtime path can serve traffic.
  • workspace.example/ is useful for initializing a new worker directory structure.
  • tests/ includes both unit and integration tests and is a good regression baseline for downstream development.

See docs/ARCHITECTURE.md for architecture details.
