ArchonMegalon/fleet
Codex Fleet Studio Bundle

This bundle deploys six Docker services behind one internal origin:

  • fleet-controller: the disposable codex exec spider / scheduler
  • fleet-studio: a target-scoped design-control plane for project, group, and fleet sessions
  • fleet-admin: the operator console for groups, projects, accounts, routing, publish history, and signoff
  • fleet-auditor: the background scanner that produces findings and candidate tasks
  • fleet-quartermaster: the deterministic capacity plane that turns credits, review debt, and audit debt into a live booster cap
  • fleet-dashboard: an Nginx gateway that keeps one Cloudflare target and serves the public Mission Bridge at /, the authenticated Operator Cockpit at /ops/, plus /admin and /studio

Default networking

  • Docker network: codex-fleet-net
  • Shared origin target for a separate cloudflared container: http://fleet-dashboard:8090
  • Default host URL: http://127.0.0.1:18090
  • Studio URL: http://127.0.0.1:18090/studio

What The Control Plane Adds

Studio lets the admin user:

  • discuss project direction and tradeoffs with a design-oriented agent role
  • draft publishable artifacts without hand-editing repo instructions
  • publish approved artifacts into repo-local .codex-studio/published/
  • publish optional feedback notes into feedback/ so coding workers see the decision immediately
  • target a single project, a whole group, or the fleet itself
  • publish coordinated multi-target proposals through proposal.targets

Admin adds:

  • an operator cockpit at /ops/
  • an explorer for raw inventory and control-plane detail at /admin/details
  • group-first operations views
  • account, routing, and project policy controls
  • signoff, refill, audit-now, and publish actions
  • group run history and publish history
  • mission, loop, runway, blocker, and truth-freshness views at /admin
  • raw inventory, lamps, attention feeds, worker tables, and control forms at /admin/details (Explorer)

Auditor adds:

  • repo, milestone, and contract findings
  • candidate tasks that can be approved or auto-published at slice boundaries
  • group-scoped artifacts such as GROUP_BLOCKERS.md and CONTRACT_SETS.yaml
  • a compact machine-readable design-mirror state file at /var/lib/codex-fleet/state/design_mirror_status.json
  • synthesized uncovered-scope task candidates that keep clustered source_items metadata instead of publishing one queue task per bullet
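The clustering behavior in the last bullet can be sketched as follows. This is an illustrative shape only: the field names (`title`, `kind`, `source_items`) are assumptions, not the auditor's real candidate-task schema.

```python
def cluster_uncovered_scope(bullets, title):
    """Fold many uncovered-scope bullets into one candidate task that
    keeps per-bullet provenance in source_items, instead of publishing
    one queue task per bullet."""
    return {
        "title": title,
        "kind": "uncovered_scope",
        "source_items": [{"text": b} for b in bullets],
    }

candidate = cluster_uncovered_scope(
    ["hub route contracts lack tests", "no retry policy documented"],
    "Cover hub route contract gaps",
)
print(len(candidate["source_items"]))  # 2
```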

Quartermaster adds:

  • a typed Capacity Plan contract instead of ad hoc burst heuristics
  • deterministic booster caps that consider credit runway, slot posture, useful work depth, review debt, audit debt, and per-project safety caps
  • explicit pool targets for core_authority, core_booster, core_rescue, review_shard, and audit_shard
  • typed incidents such as credit_runway_risk, booster_idle, review_backpressure, audit_debt, scope_contention, and slot_probe_stale
  • a rolling telemetry log at /var/lib/codex-fleet/state/quartermaster/telemetry.jsonl plus a tail API at /api/telemetry-log
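A minimal tail over a JSONL telemetry stream might look like the sketch below. It assumes one JSON object per line and skips blank or truncated lines, since a rolling log can be mid-write; in practice you would open /var/lib/codex-fleet/state/quartermaster/telemetry.jsonl instead of the in-memory sample.

```python
import io
import json

def tail_jsonl(stream, n):
    """Return the last n parsed records from a JSON-lines stream,
    skipping blank or unparseable lines."""
    records = []
    for line in stream:
        line = line.strip()
        if not line:
            continue
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # tolerate a partially written trailing line
    return records[-n:]

sample = io.StringIO('{"event": "plan"}\n{"event": "incident"}\n{"event": "plan"}\n')
print(tail_jsonl(sample, 2))  # [{'event': 'incident'}, {'event': 'plan'}]
```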

Published artifacts can include:

  • VISION.md
  • ROADMAP.md
  • ARCHITECTURE.md
  • runtime-instructions.generated.md
  • QUEUE.generated.yaml
  • WORKPACKAGES.generated.yaml
  • STATUS_PLANE.generated.yaml
  • JOURNEY_GATES.generated.json
  • SUPPORT_CASE_PACKETS.generated.json
  • PROGRESS_REPORT.generated.json
  • PROGRESS_HISTORY.generated.json

Every publish now also writes .codex-studio/published/compile.manifest.json with:

  • desired-state schema version
  • target lifecycle
  • published artifact list
  • stage provenance for design compile / policy compile / execution compile / package compile / capacity compile
  • whether dispatchable truth is actually ready for a runnable repo

The current publish-readiness contract also consumes JOURNEY_GATES.generated.json, which materializes the six release-critical Chummer campaign-OS journeys against live status-plane, support, and progress truth instead of relying on prose-only confidence.

dispatchable_truth_ready is intentionally narrower than lifecycle-complete compile health: it only answers whether the published execution/package truth is runnable against the current queue binding. design_compile and the rest of the lifecycle-required stages remain separate checks in readiness/compile health. For dispatchable and live repos, package compile and capacity compile are now required lifecycle stages, not advisory hints.
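A consumer-side read of compile.manifest.json could look like this sketch. The key names here (`lifecycle`, `artifacts`, `dispatchable_truth_ready`) are inferred from the description above, not copied from the published schema; treat them as assumptions.

```python
import json

def summarize_manifest(raw):
    """Surface the lifecycle, artifact count, and the narrow
    dispatchable-truth flag from a compile manifest payload.
    Key names are illustrative, not the canonical schema."""
    manifest = json.loads(raw)
    return {
        "lifecycle": manifest.get("lifecycle"),
        "artifacts": len(manifest.get("artifacts", [])),
        "dispatchable_truth_ready": bool(manifest.get("dispatchable_truth_ready")),
    }

raw = json.dumps({
    "schema_version": 1,
    "lifecycle": "dispatchable",
    "artifacts": ["QUEUE.generated.yaml", "STATUS_PLANE.generated.yaml"],
    "dispatchable_truth_ready": True,
})
print(summarize_manifest(raw))
```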

Chummer design supervisor

Fleet now includes a repo-local long-running supervisor for the "keep going until the design is materially implemented" loop:

python3 scripts/chummer_design_supervisor.py once
python3 scripts/chummer_design_supervisor.py loop
python3 scripts/chummer_design_supervisor.py status

The supervisor is meant to live in Fleet and be launched from the shell, not reimplemented as an ad hoc shell script each time. It reads the active Chummer design registry, PROGRAM_MILESTONES.yaml, ROADMAP.md, and /docker/fleet/NEXT_SESSION_HANDOFF.md, derives the current open frontier, writes persistent run state under state/chummer_design_supervisor/, and launches bounded codex exec worker runs across:

  • /docker/fleet
  • /docker/chummercomplete
  • /docker/fleet/repos
  • /docker/chummer5a
  • /docker/EA

The default launch helper is:

bash scripts/run_chummer_design_supervisor.sh

Fleet compose now includes fleet-design-supervisor, so a normal docker compose up -d boot owns restart-on-reboot for the loop instead of relying on a tmux shell. The helper also exposes steering and account rotation inputs from env:

CHUMMER_DESIGN_SUPERVISOR_FOCUS_OWNER=chummer6-ui bash scripts/run_chummer_design_supervisor.sh
CHUMMER_DESIGN_SUPERVISOR_FOCUS_TEXT=desktop,client bash scripts/run_chummer_design_supervisor.sh

For EA / OneMinAI lanes, the supervisor now routes each shard dynamically across healthy configured accounts from config/accounts.yaml based on lane and model, then backs off credential sources that are usage-limited or auth-stale. CHUMMER_DESIGN_SUPERVISOR_ACCOUNT_ALIASES and CHUMMER_DESIGN_SUPERVISOR_SHARD_ACCOUNT_GROUPS are explicit pinning overrides, not the normal routing path.

In Fleet main, compile.manifest.json, STATUS_PLANE.generated.yaml, and SUPPORT_CASE_PACKETS.generated.json are treated as committed public control artifacts. Downstream repos may still materialize equivalent publish/runtime artifacts locally until their own promotion flow catches up.

Chummer release-control split

For Chummer repo work, Fleet is the release/control-plane owner, not the installer or registry owner.

  • chummer6-core emits runtime-bundle truth and fingerprints
  • chummer6-ui emits desktop bundles plus installer/updater-ready artifacts
  • Fleet expands the release matrix, runs verify/signoff/promotion orchestration, and materializes registry publication projections
  • chummer6-hub-registry owns promoted release-channel truth, install/update metadata, and compatibility projections
  • chummer6-hub consumes that registry truth for /downloads and signed-in install UX

The Fleet-side helper for that path is scripts/materialize_chummer_release_registry_projection.py, and downstream guide/release consumers now read the registry-owned projection instead of a Hub-local legacy releases.json source.

Fleet main now also supports consuming the registry owner repo at runtime instead of only reading the committed projection file.

  • set CHUMMER_HUB_REGISTRY_BASE_URL=http://host.docker.internal:18091 in runtime.env when the local registry service is running on the host
  • or set CHUMMER_RELEASE_REGISTRY_CURRENT_URL to the exact /api/v1/registry/release-channel/current endpoint
  • if neither is configured, Fleet falls back to chummer6-hub-registry/.codex-studio/published/RELEASE_CHANNEL.generated.json
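The fallback order above can be expressed as a small resolver. The endpoint path is taken from the text; the function name and env-dict interface are illustrative, not Fleet's actual implementation.

```python
def resolve_registry_source(env):
    """Resolution order: exact current-endpoint URL first, then base
    URL plus the known endpoint path, then the committed projection
    file from the registry owner repo."""
    exact = env.get("CHUMMER_RELEASE_REGISTRY_CURRENT_URL")
    if exact:
        return exact
    base = env.get("CHUMMER_HUB_REGISTRY_BASE_URL")
    if base:
        return base.rstrip("/") + "/api/v1/registry/release-channel/current"
    return "chummer6-hub-registry/.codex-studio/published/RELEASE_CHANNEL.generated.json"

print(resolve_registry_source({"CHUMMER_HUB_REGISTRY_BASE_URL": "http://host.docker.internal:18091"}))
```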

Compiled mission model

Fleet now treats modeled truth and dispatchable truth separately.

  • design compile: canonical design artifacts become approved repo or group outputs
  • policy compile: approved artifacts become queue overlays, runtime instructions, blocker files, and review guidance
  • execution compile: policy outputs become concrete dispatchable truth for controller, auditor, healer, and review lanes
  • package compile: execution truth is cut into conflict-checked work packages, scope claims, and queue-bound package overlays
  • capacity compile: package truth is compiled into an explicit booster/review/audit capacity plan without giving merge authority to the worker pool

Project and group configs also carry lifecycle / maturity:

  • planned
  • scaffold
  • dispatchable
  • live
  • signoff_only

Lifecycle is not the same thing as readiness. Fleet now derives a separate readiness ladder from repo evidence:

  • repo_local_complete
  • package_canonical
  • boundary_pure
  • publicly_promoted

Those labels are evidence-backed, not handwritten. Runtime completion, compile manifests, design boundary-purity canon, and deployment promotion state each contribute separate checks, so a repo can no longer be treated as "done" just because its queue is empty or a preview URL exists.

Lockstep and runway pressure are computed from dispatch-participating members only, so scaffold repos still receive audit and design attention without distorting live mission posture.

The spider now reads published Studio artifacts before coding. If WORKPACKAGES.generated.yaml is present with a matching queue fingerprint, it becomes the first-class package overlay; otherwise QUEUE.generated.yaml overlays the configured queue.

Queue overlay format

mode: append   # append | prepend | replace
items:
  - tighten hub route contracts
  - add approval review UI for generated assets
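The three overlay modes from the comment above can be sketched as a pure function. This is a minimal interpretation of append / prepend / replace; the controller's real overlay logic may carry extra behavior (fingerprint checks, dedup) not shown here.

```python
def apply_queue_overlay(configured, overlay):
    """Apply a Studio queue overlay to the configured queue.
    mode defaults to append when omitted."""
    mode = overlay.get("mode", "append")
    items = list(overlay.get("items", []))
    if mode == "replace":
        return items
    if mode == "prepend":
        return items + list(configured)
    return list(configured) + items  # append (default)

queue = ["existing task"]
print(apply_queue_overlay(queue, {"mode": "prepend", "items": ["tighten hub route contracts"]}))
```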

Multi-account support

Both the spider and Studio can use different Codex identities. Map aliases in config/accounts.yaml to either:

  • API keys (auth_kind: api_key)
  • ChatGPT auth caches (auth_kind: chatgpt_auth_json)

Studio defaults to acct-studio-a and then falls back to acct-shared-b, but you can change that through the split desired-state config in config/accounts.yaml, config/routing.yaml, and the relevant project policy file under config/projects/.

Important operational rule:

  • if multiple aliases share the same ChatGPT auth.json, treat them as one effective human account lane
  • with one real ChatGPT-authenticated account, the safe fleet-wide parallelism is 1
  • raise the global parallel cap only when you actually add another distinct account or API-key pool
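The one-lane rule above amounts to deduplicating on the credential source rather than the alias. The mapping shape below (alias to auth kind plus source path or key id) is an assumption for illustration, not the accounts.yaml schema.

```python
def effective_parallelism(accounts):
    """Count distinct credential sources: aliases that share one
    ChatGPT auth.json collapse into a single effective human lane."""
    return len(set(accounts.values()))

accounts = {
    "acct-studio-a": ("chatgpt_auth_json", "/auth/shared/auth.json"),
    "acct-shared-b": ("chatgpt_auth_json", "/auth/shared/auth.json"),
    "acct-api-c": ("api_key", "key-1"),
}
print(effective_parallelism(accounts))  # 2: the two ChatGPT aliases are one lane
```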

Project routing now supports:

  • preferred / burst / reserve account lanes
  • Spark eligibility filtering
  • group captain policy for priority, service floors, and shed order
  • slice-boundary refill from approved auditor tasks
  • GitHub-backed Codex review gating after local verify
  • evidence-driven route classification (classification_mode: evidence_v1) using recent run outcomes instead of keywords alone

Participant burst lanes can now be sponsored by Hub user/group sessions instead of existing only as operator-local state. Fleet persists the Hub-side sponsor metadata on each dynamic participant lane, keeps the cheap groundwork loop as the default path, and emits signed contribution receipts back to Hub after lane activation, premium slice claim, landed slices, and lane stop/revoke. The controlled participant-first canary path currently lives on bounded product repos such as core and hub; the public status contract now publishes that canary posture explicitly. The Fleet self-project remains operator-only by default and keeps ChatGPT as emergency fallback rather than the normal execution lane.

Codex refresh policy

Fleet coding slices do not use the host codex install. codex exec runs inside the fleet-controller container, and Studio uses the fleet-studio container image. Both images install @openai/codex during Docker build.

This repo now includes a fleet-rebuilder sidecar that refreshes those images on a daily UTC schedule. It rebuilds fleet-controller, fleet-studio, and fleet-dashboard by default, forces a recreate so the new CLI becomes live, rotates a CODEX_NPM_REFRESH_TOKEN build arg so the Codex npm layer is not stuck behind Docker's build cache, and canary-rolls the first configured service before widening the refresh across the remaining bridge services. The same sidecar now also runs a bounded auto-heal loop for repeated unhealthy controller/dashboard states so runtime health drift is corrected by the control plane instead of staying a purely manual operator chore.

Configure the schedule in runtime.env:

FLEET_REBUILD_ENABLED=true
FLEET_REBUILD_HOUR_UTC=04
FLEET_REBUILD_MINUTE_UTC=15
FLEET_REBUILD_SERVICES="fleet-controller fleet-studio fleet-quartermaster fleet-dashboard"
FLEET_REBUILD_CANARY_ENABLED=true
FLEET_REBUILD_CANARY_SERVICES="fleet-controller"
FLEET_REBUILD_CANARY_TIMEOUT_SECONDS=180
FLEET_AUTOHEAL_ENABLED=true
FLEET_AUTOHEAL_SERVICES="fleet-controller fleet-dashboard"
FLEET_AUTOHEAL_THRESHOLD=2
FLEET_AUTOHEAL_COOLDOWN_SECONDS=300
FLEET_AUTOHEAL_TIMEOUT_SECONDS=120
FLEET_CONTROLLER_HEARTBEAT_MAX_AGE_SECONDS=45
FLEET_COMPOSE_PROJECT_NAME=fleet
FLEET_AUTOHEAL_ESCALATE_AFTER_RESTARTS=3
FLEET_AUTOHEAL_ESCALATE_WINDOW_SECONDS=1800

Gateway /health is now a static gateway liveness response, while controller health uses a bounded local HTTP probe plus a recent heartbeat file. That keeps gateway health from collapsing just because controller latency spikes, while still allowing the auto-heal loop to restart the controller when its own liveness drifts beyond the heartbeat grace window. The rebuilder and healer also pin Compose to the stable fleet project name so scheduled rebuilds and restarts target the live stack instead of accidentally creating a second /workspace project namespace. When repeated restarts exceed the configured restart budget inside the escalation window, the healer now stops pretending the problem is self-clearing and publishes an explicit escalation state instead of looping forever. Those escalations now materialize as first-class Fleet incidents and feed cockpit publish-readiness, and the canonical scripts/deploy.sh admin-status path now reads the internal admin plane first before falling back to the public gateway.

Chummer6 Guide Refresh

The Chummer6 public guide can now be refreshed and published as one governed run:

/docker/fleet/scripts/run_chummer6_guide_refresh.sh

That one-shot pipeline:

  • regenerates the EA guide packet
  • renders the guide media pack
  • finishes the downstream Chummer6 repo
  • commits and pushes main on success

To keep the guide current without manual babysitting, install the weekly host cron entry:

/docker/fleet/scripts/install_weekly_chummer6_guide_refresh.sh

The default schedule is Sunday at 05:30 UTC. Override it before install with:

export CHUMMER6_GUIDE_REFRESH_WEEKDAY_UTC=0
export CHUMMER6_GUIDE_REFRESH_HOUR_UTC=05
export CHUMMER6_GUIDE_REFRESH_MINUTE_UTC=30

Browser-facing operator access is also configured in runtime.env:

FLEET_OPERATOR_AUTH_REQUIRED=true
FLEET_OPERATOR_USER=operator
FLEET_OPERATOR_PASSWORD=replace-with-a-strong-password

Optional Hub receipt ingest wiring for sponsored participant lanes:

FLEET_HUB_LEDGER_RECEIPT_URL=http://chummer-run-api:8080/api/v1/ledger/receipts
FLEET_HUB_AI_RECEIPT_URL=http://chummer-run-ai:8080/api/v1/ai/booster/receipts
FLEET_RECEIPT_SIGNING_SECRET=replace-with-a-long-random-secret

Those URLs are used only for sponsored participant-burst receipts. Fleet still owns lane execution and jury landing; Hub owns the canonical fact/reward/entitlement ledger.

When enabled, /admin, /admin/details, /studio, /api/admin/*, /api/cockpit/*, and /api/studio/* require the shared operator login served from /admin/login.

Auto-heal now uses explicit category playbooks in config/policies.yaml for:

  • coverage
  • review
  • capacity
  • contracts

Each playbook can define deterministic steps, whether an LLM fallback is allowed, whether verify is required, and how many bounded attempts happen before escalation.

GitHub review lane

Fleet review now defaults to a GitHub-native lane instead of local codex exec review.

The Fleet self-project is the intentional exception. Its project policy stays on review.mode: local with trigger: local so the cheap groundwork -> review_light -> jury loop can run end-to-end without waiting on a PR review round-trip while the stack is self-hosting.

The runtime flow is:

  1. worker finishes a coding slice and passes local verify
  2. fleet commits and pushes a review branch
  3. fleet creates or updates a draft PR
  4. fleet requests Codex review with @codex review ...
  5. if the GitHub review lane is throttled or degraded long enough, fleet runs a bounded local fallback review and records the result before escalating
  6. if fewer than 2 Codex workers are active and no coding slice is dispatchable, fleet backfills with local review work first and queue-generation audits second
  7. fleet ingests PR review findings back into project feedback and operator views

Review fallback defaults:

  • mark the lane degraded when GitHub is throttled, when 3+ projects are waiting, or when the oldest wait exceeds 45 minutes
  • launch local fallback review after 45 minutes in a degraded lane or after 2 missed wake-up checks
  • attempt at most 1 local fallback review per PR head before escalating
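The degraded-lane trigger from the fallback defaults can be written down directly. This is a sketch of the stated thresholds only; the function name and argument shapes are illustrative.

```python
def lane_degraded(throttled, waiting_projects, oldest_wait_minutes):
    """Mark the GitHub review lane degraded per the defaults:
    GitHub throttled, 3+ projects waiting, or oldest wait > 45 min."""
    return throttled or waiting_projects >= 3 or oldest_wait_minutes > 45

print(lane_degraded(False, 1, 50))  # True: oldest wait exceeds 45 minutes
```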

Important constraints:

  • the separate review bucket comes from GitHub Codex review, not from a local prompt like review my code
  • local review should be treated as fallback-only for ordinary projects; the Fleet self-project uses local review by design
  • queue advance is gated on the GitHub review result when project review is enabled

The controller and admin containers read GitHub auth from a mounted hosts.yml at /run/gh/hosts.yml, typically provided from ${HOME}/.config/gh.

EA runtime truth on Fleet surfaces

Fleet now treats EA's provider registry as the canonical runtime read model for lane posture.

  • /v1/codex/profiles and /v1/responses/_provider_health expose provider_registry
  • /v1/providers/registry exposes the same lane/provider/capability contract directly
  • Mission Bridge and Operator Cockpit now show active and recent worker posture with lane, provider, backend, brain, capacity state, and slot ownership

Health-aware dispatch now follows that same truth instead of separate handwritten summaries:

  • groundwork and easy stay health-routed through the EA/OneMinAI path and show slot posture directly
  • review_light and jury stay audit-grade lanes with explicit provider/backend posture instead of hidden fallthrough
  • core stays isolated for the heavy implementation slices; it does not silently absorb ordinary cheap-loop work
  • when mission policy or live capacity says a lane is unavailable, Fleet waits or requeues instead of opening a degraded premium path implicitly
  • when CHUMMER_DESIGN_SUPERVISOR_PREFER_FULL_EA_LANES=1 is enabled for a premium burst, Fleet prefers the full hard-capable OneMinAI account pool first and keeps repair / survival as reserve lanes instead of active premium routing

For the Fleet self-project, local review handoff also stays serialized by design: one slice moves through groundwork -> jury -> groundwork ... -> jury -> land, and nonstop operation keeps that loop moving without bypassing review or merge authority.

Deploy

Run the installer from a full Fleet source checkout. It copies the full compose bundle (controller, studio, admin, auditor, gateway, scripts, and the split config tree) into the install directory and preserves operator-managed accounts.yaml / runtime env files unless --force is set. It then retargets the Fleet self-project from /docker/fleet to the installed bundle path, mounts that installed path into the running services, validates the self-project files referenced by design_doc and verify_cmd, and waits for the compose services plus the dashboard /health and /api/status checks to come up cleanly.

./deploy-fleet.sh

Or choose a different host port/network:

./deploy-fleet.sh --host-port 18190 --network-name my-fleet-net

Common operations

Restart after config changes:

cd /opt/codex-fleet && docker compose up -d --build

Check the controller dashboard API:

curl http://127.0.0.1:18090/api/status

If deploy-fleet.sh exits non-zero, inspect the emitted docker compose ps / docker compose logs output first; the installer now fails closed when the packaged bundle does not boot cleanly.

Packaged installs are now self-contained for the Fleet self-project. A clean host no longer needs a separate /docker/fleet checkout just to make the installed controller target itself.

Run a nonstop project loop that keeps one project continuously dispatching:

python3 scripts/fleet_codex_nonstop.py <project-id>

If another nonstop loop is already running for the same <project-id>, the second process exits with a clear lock message.

Options let you tolerate breaks without exiting:

python3 scripts/fleet_codex_nonstop.py <project-id> --include-review --include-signoff --max-idle-ticks 0

Use --never-stop to keep cycling through breaks (review, signoff, cooldown, no remaining work) without ending the process:

python3 scripts/fleet_codex_nonstop.py <project-id> --never-stop

Set --max-idle-ticks to a positive number to stop after that many consecutive empty ticks; leave it at 0 for indefinite nonstop operation.
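The --max-idle-ticks semantics described above reduce to one comparison. A minimal sketch, assuming the loop counts consecutive empty ticks:

```python
def should_stop(consecutive_idle_ticks, max_idle_ticks):
    """A positive --max-idle-ticks stops the loop after that many
    consecutive empty ticks; 0 means run indefinitely."""
    return max_idle_ticks > 0 and consecutive_idle_ticks >= max_idle_ticks

print(should_stop(5, 0))  # False: 0 keeps indefinite nonstop operation
print(should_stop(3, 3))  # True: idle budget exhausted
```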

For the Fleet self-project, indefinite nonstop mode is the intended posture. The loop keeps one project dispatching continuously, but still serializes local verify, jury review, and landing so the cheap loop cannot create a second always-hot writer behind the reviewer.

Install the local Codex launch shims tracked in this repo:

bash scripts/install_codex_and_codexea_shims.sh

That installs:

  • ~/bin/codex for the normal local wrapper
  • ~/.local/bin/codexea for the Codex + EA MCP wrapper
  • ~/.local/bin/codexaudit for the EA audit/jury wrapper
  • ~/.local/bin/codexea-watchdog for the CodexEA idle-nudge wrapper
  • ~/.local/bin/codexsurvival for the EA survival backup wrapper
  • ~/.codex/prompts/ea_interactive_bootstrap.md for the EA interactive bootstrap prompt
  • ~/.codex/prompts/ea_survival_bootstrap.md for the EA survival bootstrap prompt
  • an ea-mcp Codex MCP server entry pointing at scripts/ea_mcp_bridge.py

Default behavior:

  • codex now prepends the EA interactive bootstrap by default when that prompt file is installed, keeps the normal built-in OpenAI / ChatGPT model path unless you explicitly override it, and biases ordinary sessions toward EA MCP tools and Gemini-backed structured work before spending on long local turns.
  • codexea now locks ordinary sessions to the EA easy Responses path by default, treats --interactive as a compatibility alias for the plain TUI path, prefers the live /v1/codex/profiles model for that lane when EA reports one, emits a startup Trace: line that reflects the real provider path, and still rejects ad hoc model/provider/profile overrides on the locked easy path. If EA easy is unhealthy, that failure does not imply a fallback to the built-in ChatGPT provider.
  • The codexea shim and scripts/codexea_route.py now auto-read Fleet runtime.ea.env / runtime.env for the EA base URL, bearer token, and principal when those vars are not already exported, so Fleet-local EA connection settings apply without extra shell exports.
  • codexaudit now pins the EA jury lane, routes to ea-audit-jury, disables the cheap post-audit loop so audit sessions do not recursively self-review, and probes the BrowserAct audit path up front. If the audit connector is missing it now fails fast with a short error instead of dropping you into a JSON-only dead end. Set CODEXAUDIT_ALLOW_SOFT_FALLBACK=1 to degrade to ea-coder-fast explicitly when you still want a non-jury fallback.
  • codexea credits and codexea onemin now force a live /v1/codex/status?refresh=1 aggregate for the 1min pool, including slot count, free/max credits, percent left, current-pace ETA, the 7-day average-burn runway, owner-ledger matches, and latest explicit probe results. Add --json for scripting, or --probe-all to run POST /v1/providers/onemin/probe-all before rendering.
  • If live EA auth is unavailable but Fleet has a persisted runtime cache, codexea credits / codexea onemin now fall back to that local cache and print a short note with the cache timestamp instead of failing outright. --probe-all remains best-effort in that mode and will warn when the fresh probe could not be run.
  • codexea onemin/credits includes an optional live top-up refresh pass via POST /v1/providers/onemin/billing-refresh (--billing, enabled by default in the live codexea shell) that adds:
    • last browser refresh timestamp
    • binding count processed
    • direct API account attempt/skip counts
    • billing/member reconciliation counts
    • top-up ETA and amount from parsed usage snapshots
    • codexea credits and codexea onemin both keep the lighter default refresh behavior; add --billing-full-refresh only when you explicitly want a full direct-API account sweep that keeps going after per-account 429 results
    • when the live refresh produces no new snapshots but the aggregate already has cached billing truth, the human-readable output collapses that refresh block into one short cached-state note instead of leading with an empty diagnostics section
    To disable this pass, set CODEXEA_CREDITS_INCLUDE_BILLING=0.
  • Example:
1min billing refresh
Bindings: 3
API accounts: 2 configured, 2 attempted, 0 skipped
Billing snapshots: 3
Member reconciliations: 2
Direct API refresh: billing 2 | members 2
Direct API refresh: rate-limited, throttled
Next top-up at: 2026-03-31T00:00:00Z
Top-up amount: 2000000
Hours until top-up: 320.5

1min aggregate
...
  • Toggle output:
# default (billing lookup enabled)
$ CODEXEA_CREDITS_INCLUDE_BILLING=1 codexea onemin
1min billing refresh
Bindings: 3
...

# billing lookup disabled
$ CODEXEA_CREDITS_INCLUDE_BILLING=0 codexea onemin
1min aggregate
...
  • Set CODEXEA_MODE=responses or CODEXEA_MODE=mcp only when debugging an explicit non-easy lane. On ordinary codexea easy runs the shim rejects CODEXEA_MODE unless CODEXEA_ALLOW_EASY_MODE_OVERRIDE=1 is set deliberately for debugging.
  • CODEXEA_BASE_PROFILE still applies to explicit MCP runs, but ordinary codexea sessions no longer rely on a separate base profile to stay off the built-in provider path.
  • Set CODEX_PREFER_EA_MCP=0 or CODEX_WRAPPER_DISABLE_BOOTSTRAP=1 if you need one plain session without the EA MCP bootstrap.
  • Set CODEX_FORCE_DEFAULT_PROVIDER=0 only if you intentionally want the normal codex wrapper to stop forcing the built-in OpenAI provider for ordinary runs.
  • Set CODEXEA_INTERACTIVE_ALWAYS_EASY=0 only if you intentionally want codexea to resume using the full lane router for ordinary interactive sessions; otherwise completed interactive runs now emit a compact async post-audit packet to ea-review-light.

Use codexsurvival for slow backup work against EA's ea-coder-survival alias. It is best suited to bounded codex exec style runs because EA's survival lane is background/poll oriented in v1.

Bare codexea sessions keep the watchdog off by default. Set CODEXEA_ENABLE_WATCHDOG=1 if you want the idle-nudge wrapper, or run codexea-watchdog directly and override its behavior with CODEXEA_WATCHDOG_INTERVAL or CODEXEA_WATCHDOG_PROMPT.

Check Studio sessions:

curl http://127.0.0.1:18090/api/studio/status

Run the cross-file consistency guard:

python3 scripts/check_consistency.py

Check the operator status feeds:

curl http://127.0.0.1:18090/api/admin/status
curl http://127.0.0.1:18090/api/cockpit/mission-board
curl http://127.0.0.1:18090/api/cockpit/blocker-forecast

Request or sync a review manually:

curl -X POST http://127.0.0.1:18090/api/admin/projects/core/review/request
curl -X POST http://127.0.0.1:18090/api/admin/projects/core/review/sync

Connect an existing Cloudflare container to the shared network once:

docker network connect codex-fleet-net <cloudflared-container>

Recommended admin flow

  1. Open /ops/ and read the Operator Cockpit first: current slice, next transition, mission runway, blockers, truth freshness, and support-route posture should tell you what is moving and what stops next.
  2. Resolve the top approval or bottleneck from the Operator Cockpit before opening raw tables.
  3. Use /admin/details as the Explorer for Projects, Groups, Reviews, Audit, Milestones, Accounts, Routing, History, Studio, and Settings when you need inventory-level inspection, lifecycle/compile detail, or policy edits.
  4. Open /studio for a project, group, or fleet target when you need scoped design or planning help; /admin now previews pending Studio publish items without forcing a page jump for common approvals.
  5. Use the GitHub review lane from /admin to request, retrigger, or sync Codex review when queue advance is gated on PR review.
  6. Let the spider continue coding slices; it will ingest published runtime instructions, feedback notes, review findings, design mirrors, compile manifests, and queue or work-package overlays automatically.
