Research orchestrator that fans a single request out to three peer runtimes via `agent.delegate`, demultiplexes their event streams, and tolerates per-peer failure.
Each peer agent is reached over its own bespoke HTTP/SSE endpoint. The orchestrator stands up three separate streaming connections, parses three different event formats, and writes three retry loops. Trace context is "added later" and never quite makes it across the seam.
```php
$traceId = TraceId::random();
$jobs = [];

// Fan out: delegate the same request to every peer under one trace.
foreach (PEERS as $peer) {
    $job = delegate($client, $peer, $request, $traceId);
    if ($job->jobId !== null) {
        $mux->register($job->jobId);
    }
    $jobs[] = $job;
}

// Gather: block on each job's terminal envelope via the mux.
$completed = array_map(fn ($j) => $mux->collect($j), $jobs);
```

One transport, one envelope shape, one trace. Per-peer failure is a typed `job.failed` envelope, not a 502 with a stack trace.
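As a sketch of what "failure is data" looks like downstream, a collector can fold `job.failed` into the same result shape as `job.completed`. The envelope field names below (`type`, `result`, `error.code`) are illustrative assumptions, not quoted from the RFC:

```php
<?php

// Fold a terminal envelope into a uniform outcome record.
// Field names are hypothetical; consult the RFC for the real schema.
function foldEnvelope(array $envelope): array
{
    return match ($envelope['type']) {
        // A completed peer contributes its result.
        'job.completed' => ['ok' => true, 'result' => $envelope['result']],
        // A failed peer contributes a typed error — no exception, no 502.
        'job.failed' => ['ok' => false, 'error' => $envelope['error']['code']],
        default => ['ok' => false, 'error' => 'unexpected_envelope'],
    };
}

$outcomes = array_map('foldEnvelope', [
    ['type' => 'job.completed', 'result' => ['summary' => 'peer A findings']],
    ['type' => 'job.failed', 'error' => ['code' => 'peer_timeout']],
]);
```

The synthesis pass can then treat partial results as first-class input instead of unwinding on the first peer error.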
- `agent.delegate` + `trace_id` propagation — RFC §14, §17.1.
- Job lifecycle (accepted → terminal) — §10.2.
- Stream/event multiplexing across `job_id` — §6.4.
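The multiplexing idea can be shown with a minimal sketch: route one interleaved stream of envelopes into per-job buckets keyed by `job_id`. This is a toy stand-in for the real `JobMux`, and the `job_id`/`type` field names are assumptions:

```php
<?php

// Minimal demux sketch: one merged stream in, per-job buckets out.
// A toy stand-in for JobMux; envelope field names are illustrative.
final class MiniMux
{
    /** @var array<string, array<int, array>> envelopes bucketed by job_id */
    private array $buckets = [];

    public function register(string $jobId): void
    {
        $this->buckets[$jobId] = [];
    }

    public function route(array $envelope): void
    {
        $jobId = $envelope['job_id'] ?? null;
        if ($jobId !== null && array_key_exists($jobId, $this->buckets)) {
            $this->buckets[$jobId][] = $envelope;
        }
        // Envelopes for unregistered jobs are dropped; a real mux would log them.
    }

    /** @return array<int, array> every envelope seen for one job, in order */
    public function collect(string $jobId): array
    {
        return $this->buckets[$jobId] ?? [];
    }
}

$mux = new MiniMux();
$mux->register('job-a');
$mux->register('job-b');

// One interleaved stream: three envelopes across two jobs.
foreach ([
    ['job_id' => 'job-a', 'type' => 'job.accepted'],
    ['job_id' => 'job-b', 'type' => 'job.accepted'],
    ['job_id' => 'job-a', 'type' => 'job.completed'],
] as $env) {
    $mux->route($env);
}
```

Because every peer speaks the same envelope shape, one routing function covers all three streams — the property the bespoke-endpoint baseline lacks.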
- `main.php` — fan-out / gather / synthesize. `JobMux` is here.
- `synth.php` — `synthesize()` final-pass LLM stub.
- Bound the fan-out by capability (e.g. only peers advertising `arcpx.research.web.v1`).
- Return artifact refs from peers (`job.completed.result_ref`) instead of inline results when payloads cross the inline budget (§16).
- Cancel the slowest peer once N succeed via `cancel` (see cancellation).
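The last idea can be sketched as a gather loop that stops at a quorum and reports the stragglers to cancel. The envelope shapes and the quorum helper are hypothetical; the real flow would issue `cancel` for each returned job id:

```php
<?php

// Consume terminal envelopes until $quorum jobs complete, then return
// which jobs succeeded and which should be cancelled as stragglers.
// Envelope shapes are illustrative assumptions, not the RFC schema.
function gatherWithQuorum(iterable $terminalEnvelopes, array $jobIds, int $quorum): array
{
    $done = [];
    foreach ($terminalEnvelopes as $env) {
        if ($env['type'] === 'job.completed') {
            $done[] = $env['job_id'];
        }
        if (count($done) >= $quorum) {
            break; // Quorum reached; stop consuming the stream.
        }
    }
    // Everything not yet completed is a cancellation candidate.
    $toCancel = array_values(array_diff($jobIds, $done));
    return ['completed' => $done, 'cancel' => $toCancel];
}

$result = gatherWithQuorum(
    [
        ['job_id' => 'j1', 'type' => 'job.completed'],
        ['job_id' => 'j2', 'type' => 'job.completed'],
        ['job_id' => 'j3', 'type' => 'job.completed'], // never consumed: quorum is 2
    ],
    ['j1', 'j2', 'j3'],
    2,
);
```

Cancelling the straggler keeps tail latency bounded by the N-th fastest peer instead of the slowest one.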