This bundle deploys six Docker services behind one internal origin:
- `fleet-controller`: the disposable `codex exec` spider / scheduler
- `fleet-studio`: a target-scoped design-control plane for project, group, and fleet sessions
- `fleet-admin`: the operator console for groups, projects, accounts, routing, publish history, and signoff
- `fleet-auditor`: the background scanner that produces findings and candidate tasks
- `fleet-quartermaster`: the deterministic capacity plane that turns credits, review debt, and audit debt into a live booster cap
- `fleet-dashboard`: an Nginx gateway that keeps one Cloudflare target and serves the public Mission Bridge at `/`, the authenticated Operator Cockpit at `/ops/`, plus `/admin` and `/studio`
- Docker network: `codex-fleet-net`
- Shared origin target for a separate `cloudflared` container: `http://fleet-dashboard:8090`
- Default host URL: `http://127.0.0.1:18090`
- Studio URL: `http://127.0.0.1:18090/studio`
Studio lets the admin user:
- discuss project direction and tradeoffs with a design-oriented agent role
- draft publishable artifacts without directly editing repo instructions by hand
- publish approved artifacts into repo-local `.codex-studio/published/`
- publish optional feedback notes into `feedback/` so coding workers see the decision immediately
- target a single project, a whole group, or the fleet itself
- publish coordinated multi-target proposals through `proposal.targets`
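A coordinated multi-target proposal could be sketched like this (only `proposal.targets` is named by this bundle; the surrounding field names are illustrative assumptions, not the authoritative schema):

```yaml
proposal:
  title: align hub route contracts across the group
  targets:
    - kind: project
      id: core
    - kind: group
      id: hub-group
  artifacts:
    - ROADMAP.md
    - QUEUE.generated.yaml
```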
Admin adds:
- an operator cockpit at `/ops/`
- an explorer for raw inventory and control-plane detail at `/admin/details`
- group-first operations views
- account, routing, and project policy controls
- signoff, refill, audit-now, and publish actions
- group run history and publish history
- mission, loop, runway, blocker, and truth-freshness views at `/admin`
- raw inventory, lamps, attention feeds, worker tables, and control forms at `/admin/details` (Explorer)
Auditor adds:
- repo, milestone, and contract findings
- candidate tasks that can be approved or auto-published at slice boundaries
- group-scoped artifacts such as `GROUP_BLOCKERS.md` and `CONTRACT_SETS.yaml`
- a compact machine-readable design-mirror state file at `/var/lib/codex-fleet/state/design_mirror_status.json`
- synthesized uncovered-scope task candidates that keep clustered `source_items` metadata instead of publishing one queue task per bullet
Quartermaster adds:
- a typed `Capacity Plan` contract instead of ad hoc burst heuristics
- deterministic booster caps that consider credit runway, slot posture, useful work depth, review debt, audit debt, and per-project safety caps
- explicit pool targets for `core_authority`, `core_booster`, `core_rescue`, `review_shard`, and `audit_shard`
- typed incidents such as `credit_runway_risk`, `booster_idle`, `review_backpressure`, `audit_debt`, `scope_contention`, and `slot_probe_stale`
- a rolling telemetry log at `/var/lib/codex-fleet/state/quartermaster/telemetry.jsonl` plus a tail API at `/api/telemetry-log`
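The tail semantics can be approximated locally with a few lines of Python that read the last records from the JSONL log (a minimal sketch; the real `/api/telemetry-log` response shape is not specified here):

```python
import json
from collections import deque

def tail_telemetry(path, limit=20):
    """Return the last `limit` telemetry records from a JSONL file.

    Each non-empty line is parsed as one JSON record; a bounded deque
    keeps only the newest `limit` records in memory.
    """
    records = deque(maxlen=limit)
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            records.append(json.loads(line))
    return list(records)
```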
Published artifacts can include:
- `VISION.md`
- `ROADMAP.md`
- `ARCHITECTURE.md`
- `runtime-instructions.generated.md`
- `QUEUE.generated.yaml`
- `WORKPACKAGES.generated.yaml`
- `STATUS_PLANE.generated.yaml`
- `JOURNEY_GATES.generated.json`
- `SUPPORT_CASE_PACKETS.generated.json`
- `PROGRESS_REPORT.generated.json`
- `PROGRESS_HISTORY.generated.json`
Every publish now also writes `.codex-studio/published/compile.manifest.json` with:
- desired-state schema version
- target lifecycle
- published artifact list
- stage provenance for design compile / policy compile / execution compile / package compile / capacity compile
- whether dispatchable truth is actually ready for a runnable repo
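As an illustrative sketch of that shape (field names are assumptions modeled on the bullets above, not the authoritative schema), a manifest might look like:

```json
{
  "desired_state_schema_version": 3,
  "target_lifecycle": "dispatchable",
  "published_artifacts": ["ROADMAP.md", "QUEUE.generated.yaml"],
  "stages": {
    "design_compile": {"ok": true},
    "policy_compile": {"ok": true},
    "execution_compile": {"ok": true},
    "package_compile": {"ok": true},
    "capacity_compile": {"ok": true}
  },
  "dispatchable_truth_ready": true
}
```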
The current publish-readiness contract also consumes `JOURNEY_GATES.generated.json`, which materializes the six release-critical Chummer campaign-OS journeys against live status-plane, support, and progress truth instead of relying on prose-only confidence.
`dispatchable_truth_ready` is intentionally narrower than lifecycle-complete compile health: it only answers whether the published execution/package truth is runnable against the current queue binding. `design_compile` and the rest of the lifecycle-required stages remain separate checks in readiness/compile health.
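The split between the narrow and wide checks can be read as two small predicates; the following Python sketch uses hypothetical stage and field names modeled on the stages listed in this document, not the actual implementation:

```python
def dispatchable_truth_ready(stages, queue_fingerprint, bound_fingerprint):
    """Narrow check: is published execution/package truth runnable
    against the current queue binding? (Hypothetical field names.)"""
    return (
        stages.get("execution_compile", {}).get("ok", False)
        and stages.get("package_compile", {}).get("ok", False)
        and queue_fingerprint == bound_fingerprint
    )

def lifecycle_compile_healthy(stages, required=("design_compile", "policy_compile",
                                                "execution_compile", "package_compile",
                                                "capacity_compile")):
    """Wider check: every lifecycle-required stage compiled cleanly."""
    return all(stages.get(name, {}).get("ok", False) for name in required)
```

A repo can therefore be dispatchable-ready while still failing the wider lifecycle check, e.g. when design compile has not landed yet.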
For dispatchable and live repos, package compile and capacity compile are now required lifecycle stages, not advisory hints.
Fleet now includes a repo-local long-running supervisor for the "keep going until the design is materially implemented" loop:
```
python3 scripts/chummer_design_supervisor.py once
python3 scripts/chummer_design_supervisor.py loop
python3 scripts/chummer_design_supervisor.py status
```

The supervisor is meant to live in Fleet and be launched from the shell, not reimplemented as an ad hoc shell script each time.
It reads the active Chummer design registry, `PROGRAM_MILESTONES.yaml`, `ROADMAP.md`, and `/docker/fleet/NEXT_SESSION_HANDOFF.md`, derives the current open frontier, writes persistent run state under `state/chummer_design_supervisor/`, and launches bounded `codex exec` worker runs across:
- `/docker/fleet`
- `/docker/chummercomplete`
- `/docker/fleet/repos`
- `/docker/chummer5a`
- `/docker/EA`
The default launch helper is:
```
bash scripts/run_chummer_design_supervisor.sh
```

Fleet compose now includes `fleet-design-supervisor`, so a normal `docker compose up -d` boot owns restart-on-reboot for the loop instead of relying on a tmux shell.
The helper also exposes steering and account rotation inputs from env:
```
CHUMMER_DESIGN_SUPERVISOR_FOCUS_OWNER=chummer6-ui bash scripts/run_chummer_design_supervisor.sh
CHUMMER_DESIGN_SUPERVISOR_FOCUS_TEXT=desktop,client bash scripts/run_chummer_design_supervisor.sh
```

For EA / OneMinAI lanes, the supervisor now routes each shard dynamically across healthy configured accounts from `config/accounts.yaml` based on lane and model, then backs off credential sources that are usage-limited or auth-stale. `CHUMMER_DESIGN_SUPERVISOR_ACCOUNT_ALIASES` and `CHUMMER_DESIGN_SUPERVISOR_SHARD_ACCOUNT_GROUPS` are explicit pinning overrides, not the normal routing path.
In Fleet main, `compile.manifest.json`, `STATUS_PLANE.generated.yaml`, and `SUPPORT_CASE_PACKETS.generated.json` are treated as committed public control artifacts. Downstream repos may still materialize equivalent publish/runtime artifacts locally until their own promotion flow catches up.
For Chummer repo work, Fleet is the release/control-plane owner, not the installer or registry owner.
- `chummer6-core` emits runtime-bundle truth and fingerprints
- `chummer6-ui` emits desktop bundles plus installer/updater-ready artifacts
- Fleet expands the release matrix, runs verify/signoff/promotion orchestration, and materializes registry publication projections
- `chummer6-hub-registry` owns promoted release-channel truth, install/update metadata, and compatibility projections
- `chummer6-hub` consumes that registry truth for `/downloads` and signed-in install UX
The Fleet-side helper for that path is `scripts/materialize_chummer_release_registry_projection.py`, and downstream guide/release consumers now read the registry-owned projection instead of a Hub-local legacy `releases.json` source.
Fleet main now also supports consuming the registry owner repo at runtime instead of only reading the committed projection file.
- set `CHUMMER_HUB_REGISTRY_BASE_URL=http://host.docker.internal:18091` in `runtime.env` when the local registry service is running on the host
- or set `CHUMMER_RELEASE_REGISTRY_CURRENT_URL` to the exact `/api/v1/registry/release-channel/current` endpoint
- if neither is configured, Fleet falls back to `chummer6-hub-registry/.codex-studio/published/RELEASE_CHANNEL.generated.json`
Fleet now treats modeled truth and dispatchable truth separately.
- design compile: canonical design artifacts become approved repo or group outputs
- policy compile: approved artifacts become queue overlays, runtime instructions, blocker files, and review guidance
- execution compile: policy outputs become concrete dispatchable truth for controller, auditor, healer, and review lanes
- package compile: execution truth is cut into conflict-checked work packages, scope claims, and queue-bound package overlays
- capacity compile: package truth is compiled into an explicit booster/review/audit capacity plan without giving merge authority to the worker pool
Project and group configs also carry lifecycle / maturity:
`planned`, `scaffold`, `dispatchable`, `live`, `signoff_only`
Lifecycle is not the same thing as readiness. Fleet now derives a separate readiness ladder from repo evidence:
`repo_local_complete`, `package_canonical`, `boundary_pure`, `publicly_promoted`
Those labels are evidence-backed, not handwritten. Runtime completion, compile manifests, design boundary-purity canon, and deployment promotion state each contribute separate checks, so a repo can no longer be treated as "done" just because its queue is empty or a preview URL exists.
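A minimal sketch of an evidence-gated ladder, assuming each rung maps to one boolean evidence check (in reality each rung aggregates runtime completion, compile manifests, boundary-purity canon, and promotion state):

```python
LADDER = ["repo_local_complete", "package_canonical",
          "boundary_pure", "publicly_promoted"]

def readiness_rung(evidence):
    """Return the highest rung whose evidence passes, requiring every
    lower rung to pass first. `evidence` maps rung name -> bool
    (hypothetical shape for illustration)."""
    achieved = None
    for rung in LADDER:
        if evidence.get(rung, False):
            achieved = rung
        else:
            break  # a gap lower in the ladder caps the readiness label
    return achieved
```

The "pass every lower rung first" rule is what stops a repo with a preview URL but no canonical package from claiming a high readiness label.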
Lockstep and runway pressure are computed from dispatch-participating members only, so scaffold repos still receive audit and design attention without distorting live mission posture.
The spider now reads published Studio artifacts before coding. If `WORKPACKAGES.generated.yaml` is present with a matching queue fingerprint, it becomes the first-class package overlay; otherwise `QUEUE.generated.yaml` overlays the configured queue.
```yaml
mode: append # append | prepend | replace
items:
  - tighten hub route contracts
  - add approval review UI for generated assets
```

Both the spider and Studio can use different Codex identities. Map aliases in `config/accounts.yaml` to either:
- API keys (`auth_kind: api_key`)
- ChatGPT auth caches (`auth_kind: chatgpt_auth_json`)
Studio defaults to `acct-studio-a` and then falls back to `acct-shared-b`, but you can change that through the split desired-state config in `config/accounts.yaml`, `config/routing.yaml`, and the relevant project policy file under `config/projects/`.
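An `accounts.yaml` entry along those lines might look like the following sketch (the `auth_kind` values and alias names come from this document; the other keys are illustrative assumptions):

```yaml
accounts:
  acct-studio-a:
    auth_kind: api_key
    api_key_env: OPENAI_API_KEY_STUDIO_A   # assumed key name
  acct-shared-b:
    auth_kind: chatgpt_auth_json
    auth_json_path: /run/codex/auth-shared-b.json   # assumed key name
```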
Important operational rule:
- if multiple aliases share the same ChatGPT `auth.json`, treat them as one effective human account lane
- with one real ChatGPT-authenticated account, the safe fleet-wide parallelism is `1`
- raise the global parallel cap only when you actually add another distinct account or API-key pool
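That rule amounts to grouping aliases by their underlying credential; a Python sketch under assumed input shapes (the real config schema may differ):

```python
def effective_lanes(accounts):
    """Group aliases that share one ChatGPT auth.json into one effective
    human account lane; each API-key alias counts as its own lane.
    `accounts` maps alias -> {"auth_kind": ..., "source": ...}
    (hypothetical shape for illustration)."""
    lanes = {}
    for alias, cfg in accounts.items():
        if cfg["auth_kind"] == "chatgpt_auth_json":
            key = ("chatgpt", cfg["source"])  # same auth cache => same lane
        else:
            key = ("api_key", alias)          # each API key is a distinct lane
        lanes.setdefault(key, []).append(alias)
    return lanes

def safe_parallelism(accounts):
    """Safe fleet-wide parallel cap = number of distinct effective lanes."""
    return len(effective_lanes(accounts))
```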
Project routing now supports:
- preferred / burst / reserve account lanes
- Spark eligibility filtering
- group captain policy for priority, service floors, and shed order
- slice-boundary refill from approved auditor tasks
- GitHub-backed Codex review gating after local verify
- evidence-driven route classification (`classification_mode: evidence_v1`) using recent run outcomes instead of keywords alone
Participant burst lanes can now be sponsored by Hub user/group sessions instead of existing only as operator-local state. Fleet persists the Hub-side sponsor metadata on each dynamic participant lane, keeps the cheap groundwork loop as the default path, and emits signed contribution receipts back to Hub after lane activation, premium slice claim, landed slices, and lane stop/revoke.
The controlled participant-first canary path currently lives on bounded product repos such as core and hub; the public status contract now publishes that canary posture explicitly. The Fleet self-project remains operator-only by default and keeps ChatGPT as emergency fallback rather than the normal execution lane.
Fleet coding slices do not use the host `codex` install. `codex exec` runs inside the `fleet-controller` container, and Studio uses the `fleet-studio` container image. Both images install `@openai/codex` during Docker build.
This repo now includes a fleet-rebuilder sidecar that refreshes those images on a daily UTC schedule.
It rebuilds fleet-controller, fleet-studio, and fleet-dashboard by default, forces a recreate so
the new CLI becomes live, rotates a CODEX_NPM_REFRESH_TOKEN build arg so the Codex npm layer is
not stuck behind Docker's build cache, and canary-rolls the first configured service before widening
the refresh across the remaining bridge services. The same sidecar now also runs a bounded auto-heal
loop for repeated unhealthy controller/dashboard states so runtime health drift is corrected by the
control plane instead of staying a purely manual operator chore.
Configure the schedule in `runtime.env`:

```
FLEET_REBUILD_ENABLED=true
FLEET_REBUILD_HOUR_UTC=04
FLEET_REBUILD_MINUTE_UTC=15
FLEET_REBUILD_SERVICES="fleet-controller fleet-studio fleet-quartermaster fleet-dashboard"
FLEET_REBUILD_CANARY_ENABLED=true
FLEET_REBUILD_CANARY_SERVICES="fleet-controller"
FLEET_REBUILD_CANARY_TIMEOUT_SECONDS=180
FLEET_AUTOHEAL_ENABLED=true
FLEET_AUTOHEAL_SERVICES="fleet-controller fleet-dashboard"
FLEET_AUTOHEAL_THRESHOLD=2
FLEET_AUTOHEAL_COOLDOWN_SECONDS=300
FLEET_AUTOHEAL_TIMEOUT_SECONDS=120
FLEET_CONTROLLER_HEARTBEAT_MAX_AGE_SECONDS=45
FLEET_COMPOSE_PROJECT_NAME=fleet
FLEET_AUTOHEAL_ESCALATE_AFTER_RESTARTS=3
FLEET_AUTOHEAL_ESCALATE_WINDOW_SECONDS=1800
```

Gateway `/health` is now a static gateway liveness response, while controller health uses a bounded
local HTTP probe plus a recent heartbeat file. That keeps gateway health from collapsing just because
controller latency spikes, while still allowing the auto-heal loop to restart the controller when its
own liveness drifts beyond the heartbeat grace window. The rebuilder and healer also pin Compose to
the stable fleet project name so scheduled rebuilds and restarts target the live stack instead of
accidentally creating a second /workspace project namespace. When repeated restarts exceed the
configured restart budget inside the escalation window, the healer now stops pretending the problem
is self-clearing and publishes an explicit escalation state instead of looping forever. Those
escalations now materialize as first-class Fleet incidents and feed cockpit publish-readiness, and
the canonical scripts/deploy.sh admin-status path now reads the internal admin plane first before
falling back to the public gateway.
The Chummer6 public guide can now be refreshed and published as one governed run:
```
/docker/fleet/scripts/run_chummer6_guide_refresh.sh
```

That one-shot pipeline:
- regenerates the EA guide packet
- renders the guide media pack
- finishes the downstream `Chummer6` repo
- commits and pushes `main` on success
To keep the guide current without manual babysitting, install the weekly host cron entry:
```
/docker/fleet/scripts/install_weekly_chummer6_guide_refresh.sh
```

The default schedule is Sunday at 05:30 UTC. Override it before install with:
```
export CHUMMER6_GUIDE_REFRESH_WEEKDAY_UTC=0
export CHUMMER6_GUIDE_REFRESH_HOUR_UTC=05
export CHUMMER6_GUIDE_REFRESH_MINUTE_UTC=30
```

Browser-facing operator access is also configured in `runtime.env`:
```
FLEET_OPERATOR_AUTH_REQUIRED=true
FLEET_OPERATOR_USER=operator
FLEET_OPERATOR_PASSWORD=replace-with-a-strong-password
```

Optional Hub receipt ingest wiring for sponsored participant lanes:
```
FLEET_HUB_LEDGER_RECEIPT_URL=http://chummer-run-api:8080/api/v1/ledger/receipts
FLEET_HUB_AI_RECEIPT_URL=http://chummer-run-ai:8080/api/v1/ai/booster/receipts
FLEET_RECEIPT_SIGNING_SECRET=replace-with-a-long-random-secret
```

Those URLs are used only for sponsored participant-burst receipts. Fleet still owns lane execution and jury landing; Hub owns the canonical fact/reward/entitlement ledger.
When enabled, `/admin`, `/admin/details`, `/studio`, `/api/admin/*`, `/api/cockpit/*`, and `/api/studio/*` require the shared operator login served from `/admin/login`.
Auto-heal now uses explicit category playbooks in `config/policies.yaml` for `coverage`, `review`, `capacity`, and `contracts`.
Each playbook can define deterministic steps, whether an LLM fallback is allowed, whether verify is required, and how many bounded attempts happen before escalation.
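A playbook entry along those lines might look like this sketch (the category names come from this document; the keys and step names are illustrative assumptions, not the actual `policies.yaml` schema):

```yaml
autoheal_playbooks:
  review:
    steps:                     # deterministic steps tried first
      - resync_review_branch
      - retrigger_codex_review
    allow_llm_fallback: false  # whether an LLM fallback is permitted
    require_verify: true       # whether verify must pass after healing
    max_attempts: 2            # bounded attempts before escalation
```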
Fleet review now defaults to a GitHub-native lane instead of local `codex exec` review.
The Fleet self-project is the intentional exception. Its project policy stays on `review.mode: local` with `trigger: local` so the cheap groundwork -> review_light -> jury loop can run end-to-end without waiting on a PR review round-trip while the stack is self-hosting.
The runtime flow is:
- worker finishes a coding slice and passes local verify
- fleet commits and pushes a review branch
- fleet creates or updates a draft PR
- fleet requests Codex review with `@codex review ...`
- if the GitHub review lane is throttled or degraded long enough, fleet runs a bounded local fallback review and records the result before escalating
Review fallback defaults:
- mark the lane degraded when GitHub is throttled, when `3+` projects are waiting, or when the oldest wait exceeds `45` minutes
- launch local fallback review after `45` minutes in a degraded lane or after `2` missed wake-up checks
- attempt at most `1` local fallback review per PR head before escalating
- if fewer than `2` Codex workers are active and no coding slice is dispatchable, fleet backfills with local review work first and queue-generation audits second
- fleet ingests PR review findings back into project feedback and operator views
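Those defaults can be read as a small decision function; this Python sketch encodes the documented thresholds (function names and inputs are hypothetical, not the real implementation):

```python
def review_lane_degraded(github_throttled, waiting_projects, oldest_wait_minutes):
    """Degraded when GitHub is throttled, 3+ projects are waiting,
    or the oldest wait exceeds 45 minutes."""
    return github_throttled or waiting_projects >= 3 or oldest_wait_minutes > 45

def should_launch_local_fallback(degraded_minutes, missed_wakeups,
                                 fallbacks_for_pr_head):
    """Launch a bounded local fallback review after 45 degraded minutes
    or 2 missed wake-up checks, at most once per PR head."""
    if fallbacks_for_pr_head >= 1:
        return False  # budget exhausted: escalate instead
    return degraded_minutes >= 45 or missed_wakeups >= 2
```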
Important constraints:
- the separate review bucket comes from GitHub Codex review, not from a local prompt like `review my code`
- local review should be treated as fallback-only for ordinary projects; the Fleet self-project uses local review by design
- queue advance is gated on the GitHub review result when project review is enabled
The controller and admin containers read GitHub auth from a mounted `hosts.yml` at `/run/gh/hosts.yml`, typically provided from `${HOME}/.config/gh`.
Fleet now treats EA's provider registry as the canonical runtime read model for lane posture.
- `/v1/codex/profiles` and `/v1/responses/_provider_health` expose `provider_registry`
- `/v1/providers/registry` exposes the same lane/provider/capability contract directly
- Mission Bridge and Operator Cockpit now show active and recent worker posture with lane, provider, backend, brain, capacity state, and slot ownership
Health-aware dispatch now follows that same truth instead of separate handwritten summaries:
- `groundwork` and `easy` stay health-routed through the EA/OneMinAI path and show slot posture directly
- `review_light` and `jury` stay audit-grade lanes with explicit provider/backend posture instead of hidden fallthrough
- `core` stays isolated for the heavy implementation slices; it does not silently absorb ordinary cheap-loop work
- when mission policy or live capacity says a lane is unavailable, Fleet waits or requeues instead of opening a degraded premium path implicitly
- when `CHUMMER_DESIGN_SUPERVISOR_PREFER_FULL_EA_LANES=1` is enabled for a premium burst, Fleet prefers the full hard-capable OneMinAI account pool first and keeps `repair` / `survival` as reserve lanes instead of active premium routing
For the Fleet self-project, local review handoff also stays serialized by design: one slice moves through groundwork -> jury -> groundwork ... -> jury -> land, and nonstop operation keeps that loop moving without bypassing review or merge authority.
Run the installer from a full Fleet source checkout. It copies the full compose bundle (controller, studio, admin, auditor, gateway, scripts, and the split config tree) into the install directory and preserves operator-managed `accounts.yaml` / runtime env files unless `--force` is set. It then retargets the Fleet self-project from `/docker/fleet` to the installed bundle path, mounts that installed path into the running services, validates the self-project files referenced by `design_doc` and `verify_cmd`, and waits for the compose services plus the dashboard `/health` and `/api/status` checks to come up cleanly.
```
./deploy-fleet.sh
```

Or choose a different host port/network:

```
./deploy-fleet.sh --host-port 18190 --network-name my-fleet-net
```

Restart after config changes:

```
cd /opt/codex-fleet && docker compose up -d --build
```

Check the controller dashboard API:

```
curl http://127.0.0.1:18090/api/status
```

If `deploy-fleet.sh` exits non-zero, inspect the emitted `docker compose ps` / `docker compose logs` output first; the installer now fails closed when the packaged bundle does not boot cleanly.
Packaged installs are now self-contained for the Fleet self-project. A clean host no longer needs a separate /docker/fleet checkout just to make the installed controller target itself.
Run a nonstop project loop that keeps one project continuously dispatching:
```
python3 scripts/fleet_codex_nonstop.py <project-id>
```

If another nonstop loop is already running for the same `<project-id>`, the second process exits with a clear lock message.
Options let you tolerate breaks without exiting:

```
python3 scripts/fleet_codex_nonstop.py <project-id> --include-review --include-signoff --max-idle-ticks 0
```

Use `--never-stop` to keep cycling through breaks (review, signoff, cooldown, no remaining work) without ending the process:

```
python3 scripts/fleet_codex_nonstop.py <project-id> --never-stop
```

Set `--max-idle-ticks` to a positive number to stop after that many consecutive empty ticks; leave it at `0` for indefinite nonstop operation.
For the Fleet self-project, indefinite nonstop mode is the intended posture. The loop keeps one project dispatching continuously, but still serializes local verify, jury review, and landing so the cheap loop cannot create a second always-hot writer behind the reviewer.
Install the local Codex launch shims tracked in this repo:
```
bash scripts/install_codex_and_codexea_shims.sh
```

That installs:

- `~/bin/codex` for the normal local wrapper
- `~/.local/bin/codexea` for the Codex + EA MCP wrapper
- `~/.local/bin/codexaudit` for the EA audit/jury wrapper
- `~/.local/bin/codexea-watchdog` for the CodexEA idle-nudge wrapper
- `~/.local/bin/codexsurvival` for the EA survival backup wrapper
- `~/.codex/prompts/ea_interactive_bootstrap.md` for the EA interactive bootstrap prompt
- `~/.codex/prompts/ea_survival_bootstrap.md` for the EA survival bootstrap prompt
- an `ea-mcp` Codex MCP server entry pointing at `scripts/ea_mcp_bridge.py`
Default behavior:
- `codex` now prepends the EA interactive bootstrap by default when that prompt file is installed, keeps the normal built-in OpenAI / ChatGPT model path unless you explicitly override it, and biases ordinary sessions toward EA MCP tools and Gemini-backed structured work before spending on long local turns.
- `codexea` now locks ordinary sessions to the EA `easy` Responses path by default, treats `--interactive` as a compatibility alias for the plain TUI path, prefers the live `/v1/codex/profiles` model for that lane when EA reports one, emits a startup `Trace:` line that reflects the real provider path, and still rejects ad hoc model/provider/profile overrides on the locked easy path. If EA `easy` is unhealthy, that failure does not imply a fallback to the built-in ChatGPT provider.
- The `codexea` shim and `scripts/codexea_route.py` now auto-read Fleet `runtime.ea.env` / `runtime.env` for the EA base URL, bearer token, and principal when those vars are not already exported, so Fleet-local EA connection settings apply without extra shell exports.
- `codexaudit` now pins the EA `jury` lane, routes to `ea-audit-jury`, disables the cheap post-audit loop so audit sessions do not recursively self-review, and probes the BrowserAct audit path up front. If the audit connector is missing it now fails fast with a short error instead of dropping you into a JSON-only dead end. Set `CODEXAUDIT_ALLOW_SOFT_FALLBACK=1` to degrade to `ea-coder-fast` explicitly when you still want a non-jury fallback.
- `codexea credits` and `codexea onemin` now force a live `/v1/codex/status?refresh=1` aggregate for the 1min pool, including slot count, free/max credits, percent left, current-pace ETA, the 7-day average-burn runway, owner-ledger matches, and latest explicit probe results. Add `--json` for scripting, or `--probe-all` to run `POST /v1/providers/onemin/probe-all` before rendering.
- If live EA auth is unavailable but Fleet has a persisted runtime cache, `codexea credits` / `codexea onemin` now fall back to that local cache and print a short note with the cache timestamp instead of failing outright. `--probe-all` remains best-effort in that mode and will warn when the fresh probe could not be run.
- `codexea onemin` / `credits` includes an optional live top-up refresh pass via `POST /v1/providers/onemin/billing-refresh` (`--billing`, enabled by default in the live `codexea` shell) that adds:
  - last browser refresh timestamp
  - binding count processed
  - direct API account attempt/skip counts
  - billing/member reconciliation counts
  - top-up ETA and amount from parsed usage snapshots
- `codexea credits` and `codexea onemin` both keep the lighter default refresh behavior; add `--billing-full-refresh` only when you explicitly want a full direct-API account sweep that keeps going after per-account `429` results
- when the live refresh produces no new snapshots but the aggregate already has cached billing truth, the human-readable output collapses that refresh block into one short cached-state note instead of leading with an empty diagnostics section
- To disable this pass, set `CODEXEA_CREDITS_INCLUDE_BILLING=0`.
- Example:

```
1min billing refresh
Bindings: 3
API accounts: 2 configured, 2 attempted, 0 skipped
Billing snapshots: 3
Member reconciliations: 2
Direct API refresh: billing 2 | members 2
Direct API refresh: rate-limited, throttled
Next top-up at: 2026-03-31T00:00:00Z
Top-up amount: 2000000
Hours until top-up: 320.5
1min aggregate
...
```
- Toggle output:

```
# default (billing lookup enabled)
$ CODEXEA_CREDITS_INCLUDE_BILLING=1 codexea onemin
1min billing refresh
Bindings: 3
...

# billing lookup disabled
$ CODEXEA_CREDITS_INCLUDE_BILLING=0 codexea onemin
1min aggregate
...
```
- Set `CODEXEA_MODE=responses` or `CODEXEA_MODE=mcp` only when debugging an explicit non-easy lane. On ordinary `codexea` easy runs the shim rejects `CODEXEA_MODE` unless `CODEXEA_ALLOW_EASY_MODE_OVERRIDE=1` is set deliberately for debugging.
- `CODEXEA_BASE_PROFILE` still applies to explicit MCP runs, but ordinary `codexea` sessions no longer rely on a separate base profile to stay off the built-in provider path.
- Set `CODEX_PREFER_EA_MCP=0` or `CODEX_WRAPPER_DISABLE_BOOTSTRAP=1` if you need one plain session without the EA MCP bootstrap.
- Set `CODEX_FORCE_DEFAULT_PROVIDER=0` only if you intentionally want the normal `codex` wrapper to stop forcing the built-in OpenAI provider for ordinary runs.
- Set `CODEXEA_INTERACTIVE_ALWAYS_EASY=0` only if you intentionally want `codexea` to resume using the full lane router for ordinary interactive sessions; otherwise completed interactive runs now emit a compact async post-audit packet to `ea-review-light`.
Use `codexsurvival` for slow backup work against EA's `ea-coder-survival` alias. It is best suited to bounded `codex exec` style runs because EA's survival lane is background/poll oriented in v1.
Bare `codexea` sessions keep the watchdog off by default. Set `CODEXEA_ENABLE_WATCHDOG=1` if you want the idle-nudge wrapper, or run `codexea-watchdog` directly and override its behavior with `CODEXEA_WATCHDOG_INTERVAL` or `CODEXEA_WATCHDOG_PROMPT`.
Check Studio sessions:
```
curl http://127.0.0.1:18090/api/studio/status
```

Run the cross-file consistency guard:

```
python3 scripts/check_consistency.py
```

Check the operator status feeds:

```
curl http://127.0.0.1:18090/api/admin/status
curl http://127.0.0.1:18090/api/cockpit/mission-board
curl http://127.0.0.1:18090/api/cockpit/blocker-forecast
```

Request or sync a review manually:

```
curl -X POST http://127.0.0.1:18090/api/admin/projects/core/review/request
curl -X POST http://127.0.0.1:18090/api/admin/projects/core/review/sync
```
Connect an existing Cloudflare container to the shared network once:

```
docker network connect codex-fleet-net <cloudflared-container>
```

- Open `/ops/` and read the Operator Cockpit first: current slice, next transition, mission runway, blockers, truth freshness, and support-route posture should tell you what is moving and what stops next.
- Resolve the top approval or bottleneck from the Operator Cockpit before opening raw tables.
- Use `/admin/details` as the Explorer for Projects, Groups, Reviews, Audit, Milestones, Accounts, Routing, History, Studio, and Settings when you need inventory-level inspection, lifecycle/compile detail, or policy edits.
- Open `/studio` for a project, group, or fleet target when you need scoped design or planning help; `/admin` now previews pending Studio publish items without forcing a page jump for common approvals.
- Use the GitHub review lane from `/admin` to request, retrigger, or sync Codex review when queue advance is gated on PR review.
- Let the spider continue coding slices; it will ingest published runtime instructions, feedback notes, review findings, design mirrors, compile manifests, and queue or work-package overlays automatically.