This document provides a comprehensive reference for the Conductor workflow YAML syntax.
- Workflow Configuration
- Agents
- Parallel Groups
- Routes
- Inputs and Outputs
- Limits and Safety
- Tools
- External File References
- Hooks
The top-level workflow section defines metadata and behavior for the entire workflow.
workflow:
name: string # Required: Unique workflow identifier
description: string # Optional: Human-readable description
entry_point: string # Required: Name of first agent to execute
metadata: # Optional: free-form key/value metadata
tracker: ado # surfaced in the workflow_started event
project_url: https://... # CLI --metadata / -m can add or override
instructions: # Optional: extra instruction files (paths)
- ./docs/conventions.md # prepended to every agent prompt
- ./AGENTS.md # also auto-discoverable via
# --workspace-instructions (see CLI ref)
limits:
max_iterations: 10 # Default: 10, max: 500
timeout_seconds: 600 # Optional: Maximum wall-clock time (seconds)
hooks:
on_start: "{{ template }}" # Optional: Expression evaluated on start
on_complete: "{{ template }}" # Optional: Expression evaluated on success
on_error: "{{ template }}" # Optional: Expression evaluated on error
context_mode: accumulate # accumulate | snapshot | minimal (default: accumulate)
runtime:
provider: copilot # copilot | claude
default_model: gpt-5.2
temperature: 0.7
max_tokens: 4096
default_reasoning_effort: medium # Optional: low | medium | high | xhigh
# Workflow-wide default for reasoning /
# extended-thinking effort. Inherited by
# every provider-backed agent unless it
# declares its own `reasoning.effort`.
# See docs/configuration.md#reasoning-effort.

Workflow metadata is included verbatim in the workflow_started event and lets downstream consumers (dashboards, queue runners, observability tools) adapt without parsing the YAML. CLI `--metadata key=value` flags merge on top of YAML metadata (CLI wins on conflicts).
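The merge precedence can be sketched in Python (an illustration of the rule, not the actual CLI implementation):

```python
def merged_metadata(yaml_metadata: dict, cli_flags: list[str]) -> dict:
    """Merge CLI --metadata key=value flags on top of YAML metadata."""
    merged = dict(yaml_metadata)
    for flag in cli_flags:
        key, _, value = flag.partition("=")
        merged[key] = value  # CLI wins on conflicts
    return merged
```

So a workflow declaring `tracker: ado` invoked with `-m tracker=jira` surfaces `tracker: jira` in the workflow_started event.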
Instructions files are loaded once and prepended to every agent's rendered prompt. They are inherited by sub-workflows and persisted in checkpoints so resume continues to use the same instructions. Use the YAML instructions: list for workflow-pinned context, or pass --workspace-instructions on the CLI to auto-discover AGENTS.md, CLAUDE.md, and .github/copilot-instructions.md by walking from CWD up to the git root.
- `accumulate` (default): Agents see all previous agent outputs
- `snapshot`: Agents see only the context at workflow start
- `minimal`: Agents see only their direct dependencies
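For example, a workflow whose agents only need their declared inputs can opt into minimal context (the workflow and agent names here are illustrative):

```yaml
workflow:
  name: report-builder        # illustrative name
  entry_point: outliner
  context_mode: minimal       # each agent sees only its direct dependencies
```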
Agents are defined in the agents list. Each agent represents a unit of work.
agents:
- name: string # Required: Unique agent identifier
description: string # Optional: Purpose description
type: agent # agent | human_gate | script | workflow (default: agent)
model: string # Optional: Model identifier (e.g., 'claude-sonnet-4.5')
prompt: | # Required for type=agent: Agent instructions
Multi-line prompt with Jinja2 templates
{{ workflow.input.field }}
{{ previous_agent.output.field }}
input: # Optional: Explicit input declarations
field_name:
from: "{{ expression }}"
type: string # string | number | boolean | array | object
required: true
output: # Optional: Output schema for validation
field_name:
type: string
description: "Field purpose"
tools: # Optional: Agent-specific tools
- tool_name
reasoning: # Optional: per-agent reasoning override
effort: high # low | medium | high | xhigh
# Overrides runtime.default_reasoning_effort.
# Only valid on type=agent (rejected on
# script, human_gate, workflow).
# See docs/configuration.md#reasoning-effort.
routes: # Optional: Routing logic
- to: next_agent # Agent name or $end
when: "{{ condition }}" # Optional: Route condition

Human gates pause workflow execution for user input:
agents:
- name: approval_gate
type: human_gate
description: "Approve the proposed changes"
options: # Required: List of choices
- name: approve
description: "Approve and proceed"
- name: revise
description: "Request revisions"
- name: reject
description: "Reject the proposal"
routes:
- to: implementer
when: "{{ approval_gate.choice == 'approve' }}"
- to: reviser
when: "{{ approval_gate.choice == 'revise' }}"
- to: $end
when: "{{ approval_gate.choice == 'reject' }}"

Gate prompts support full Markdown formatting. In the terminal, prompts are rendered with Rich Markdown (headings, bold, lists, code blocks). In the web dashboard, prompts render as styled HTML with interactive features:
- Headings, bold, lists, code blocks — all standard Markdown syntax is rendered
- Tables — GitHub Flavored Markdown (GFM) pipe tables are supported
- File links — relative file paths in the prompt (e.g., `./src/plan.md`) are auto-detected and rendered as clickable links that open in VS Code
- URLs — bare `http://` and `https://` URLs are auto-linked
agents:
- name: review_gate
type: human_gate
description: "Review the generated plan"
prompt: |
## Review Required
The planner produced the following artifacts:
| File | Purpose |
|------|---------|
| ./output/plan.md | Implementation plan |
| ./output/timeline.md | Delivery timeline |
Please review the files above and choose how to proceed.
See also: https://wiki.example.com/review-guidelines
options:
- name: approve
description: "Looks good — proceed"
- name: revise
description: "Needs changes"

The auto-linkify processor is Markdown-aware: it skips fenced code blocks, inline code spans, and existing markdown links. File paths are validated against the workflow root directory (path traversal is blocked).
Script steps run shell commands as workflow steps, capturing stdout, stderr, and exit code. Use them to integrate shell scripts, run tests, or invoke external tools without an AI agent.
agents:
- name: run_tests
type: script
description: "Run the test suite" # Optional
command: pytest # Required: command to execute (Jinja2 template)
args: # Optional: list of arguments (each Jinja2 template)
- "{{ workflow.input.test_path }}"
- "--verbose"
env: # Optional: environment variables for subprocess
CI: "true"
PYTHONPATH: "/app/src"
working_dir: "/app" # Optional: working directory (Jinja2 template)
timeout: 120 # Optional: per-step timeout in seconds
routes:
- to: analyzer
when: "exit_code == 0"
- to: error_handler

Output structure — script step output is always available in context as:
| Field | Type | Description |
|---|---|---|
| `stdout` | string | Captured standard output |
| `stderr` | string | Captured standard error |
| `exit_code` | integer | Process exit code (0 = success) |
JSON stdout auto-parsing — if stdout is valid JSON and the parsed value is an object, its fields are merged into the agent's output dict alongside stdout/stderr/exit_code. This lets you route on parsed fields directly instead of opaque exit codes:
# Script writes to stdout: {"route": "planning", "issue_count": 3}
agents:
- name: detector
type: script
command: pwsh
args: ["-File", "{{ workflow.dir }}/scripts/detect.ps1"]
routes:
- to: planner
when: "route == 'planning'" # parsed field
- to: scaler
when: "issue_count > 100" # parsed field
- to: $end

JSON arrays and scalars are ignored (only objects merge). Non-JSON stdout is unchanged. Parsed fields shadow stdout/stderr/exit_code if a script outputs those as JSON keys.
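The merge rule can be sketched in Python (a simplified illustration of the behavior described above, not the engine's actual code):

```python
import json

def merge_script_output(stdout: str, stderr: str, exit_code: int) -> dict:
    """Build a script step's output dict, merging parsed JSON objects."""
    output = {"stdout": stdout, "stderr": stderr, "exit_code": exit_code}
    try:
        parsed = json.loads(stdout)
    except json.JSONDecodeError:
        return output  # non-JSON stdout: base fields only
    if isinstance(parsed, dict):
        # Object fields merge in; they shadow base fields on key collision.
        output.update(parsed)
    # JSON arrays and scalars are ignored.
    return output
```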
Access in downstream agents:
prompt: |
The test run produced:
{{ run_tests.output.stdout }}
Exit code: {{ run_tests.output.exit_code }}

Routing on exit code — use exit_code in route conditions to branch on success or failure:
routes:
- to: success_handler
when: "exit_code == 0" # simpleeval syntax
- to: failure_handler
when: "{{ output.exit_code != 0 }}" # Jinja2 syntax
- to: $end

Restrictions — script steps cannot have prompt, model, provider, tools, system_prompt, output schema, or options. Script steps also cannot be used inside parallel groups or for_each groups.
Environment variable note — values in env are passed as-is to the subprocess (they are not rendered as Jinja2 templates). Use ${VAR} syntax in the workflow YAML loader if you need environment variable substitution in env values.
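For example, to substitute environment variables at load time rather than pass literal `${...}` strings (the variable names and script here are hypothetical):

```yaml
agents:
  - name: deploy
    type: script
    command: ./deploy.sh
    env:
      # Resolved by the YAML loader before the subprocess starts
      API_TOKEN: ${DEPLOY_TOKEN}
      REGION: ${DEPLOY_REGION:-us-east-1}   # default when unset
```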
Sub-workflow steps reference external workflow YAML files, enabling composable and reusable workflow building blocks. The sub-workflow runs as a black box — its internal agents are not visible to the parent.
agents:
- name: deep_research
type: workflow
workflow: ./research-pipeline.yaml # Required: path to sub-workflow YAML
input: # Optional: explicit input declarations
- workflow.input.topic
input_mapping: # Optional: per-call inputs to the sub-workflow
topic: "{{ workflow.input.topic }}"
depth: "{{ research_planner.output.depth }}"
max_depth: 3 # Optional: per-agent recursion cap
# (additionally bounded by global
# MAX_SUBWORKFLOW_DEPTH = 10)
output: # Optional: output schema for validation
findings:
type: string
routes:
- to: synthesizer

Key semantics:
- The `workflow` path is resolved relative to the parent workflow file
- Sub-workflow inherits the parent's provider configuration
- Sub-workflow output is stored in context and accessible via `{{ agent_name.output.field }}`
- Recursive composition is supported (sub-workflows can reference other sub-workflows) with a global depth limit of `MAX_SUBWORKFLOW_DEPTH = 10`
- Self-referential sub-workflows (a workflow referencing itself) are allowed; depth is bounded by the global cap and the optional per-agent `max_depth` field
- `input_mapping` keys are sub-workflow input names; each value is a Jinja2 expression evaluated against the parent's context. When `input_mapping` is omitted, the parent's `workflow.input.*` is forwarded to the sub-workflow as before
Access sub-workflow output in downstream agents:
prompt: |
The research findings were:
{{ deep_research.output.findings }}

Sub-workflows in for_each groups — type: workflow agents can be used inside for_each groups to fan out one sub-workflow run per item in the source array. Each iteration receives its own input_mapping evaluated against the loop variable, and emits its own subworkflow_started / subworkflow_completed events:
parallel:
- name: plan_issues
for_each:
source: epic_planner.output.issues
as: issue
max_concurrent: 1
agent:
type: workflow
workflow: ./plan-and-review.yaml
input_mapping:
work_item_id: "{{ issue.id }}"
title: "{{ issue.title }}"

Restrictions — workflow steps cannot have prompt, model, provider, tools, system_prompt, command, or options.
Dialog mode allows agents to conditionally pause after execution and enter a free-form conversation with the user. An LLM evaluator examines the agent's output against user-defined criteria and decides whether to initiate a dialog.
agents:
- name: researcher
prompt: "Research the given topic thoroughly"
dialog:
trigger_prompt: |
Enter dialog if the agent expresses uncertainty about
the user's intent, encounters ambiguous requirements,
or needs clarification before proceeding.
routes:
- to: writer

When triggered, the user is presented with a choice:
- Discuss — engage in a multi-turn conversation with the agent
- Do your best and continue — skip the dialog and let the agent proceed
After the conversation, the agent re-executes with the dialog transcript as additional context, producing a refined output.
Configuration:
| Field | Type | Required | Description |
|---|---|---|---|
| `dialog.trigger_prompt` | string | Yes | Criteria for the LLM evaluator to decide when dialog is needed |
Behavior notes:
- Dialog is supported on regular `agent` type only (not `human_gate`, `script`, or `workflow`)
- In web dashboard mode, the dialog temporarily replaces the graph area with a chat interface
- When `--skip-gates` is set (e.g., CI/automation), dialogs are automatically skipped
- The evaluator prompt should describe when to trigger dialog, not what to ask — the evaluator generates the opening question from the agent's output context
- After dialog, the agent sees the full conversation transcript and produces updated output
Parallel groups execute multiple agents concurrently for improved performance.
Execute a fixed list of agents in parallel:
parallel:
- name: string # Required: Group identifier
description: string # Optional: Purpose description
agents: # Required: Agents to run in parallel
- agent_name_1
- agent_name_2
- agent_name_3
failure_mode: fail_fast # Required: Error handling strategy
# Options: fail_fast | continue_on_error | all_or_nothing
routes: # Optional: Routes after parallel execution
- to: next_agent
when: "{{ condition }}"

Execute an agent template for each item in an array determined at runtime:
for_each:
- name: string # Required: Group identifier
type: for_each # Required: Marks this as for-each group
description: string # Optional: Purpose description
source: string # Required: Reference to array in context
# Example: "finder.output.items"
as: string # Required: Loop variable name
# Available in templates as {{ <var> }}
# Reserved names: workflow, context, output, _index, _key
agent: # Required: Inline agent definition
model: string # Optional: Model override
prompt: | # Required: Template with {{ <var> }}
Process {{ item }}
Index: {{ _index }} # Zero-based item index
{% if _key is defined %}
Key: {{ _key }} # Extracted key (if key_by specified)
{% endif %}
output: # Optional: Output schema
result: { type: string }
max_concurrent: 10 # Optional: Concurrent execution limit
# Default: 10
failure_mode: fail_fast # Optional: Error handling strategy
# Default: fail_fast
key_by: string # Optional: Path for dict-based outputs
# Example: "item.id" → outputs["123"]
routes: # Optional: Routes after execution
- to: next_agent

Loop Variables:
For-each agents have access to special loop variables in addition to the custom loop variable defined by `as`:
- `{{ <var_name> }}` - Current item from array (e.g., `{{ kpi }}`, `{{ item }}`)
- `{{ _index }}` - Zero-based index of current item (0, 1, 2, ...)
- `{{ _key }}` - Extracted key value (only if `key_by` is specified)
Reserved Variable Names:
The following names cannot be used for the `as` parameter:
- `workflow` - Reserved for workflow inputs
- `context` - Reserved for execution metadata
- `output` - Reserved for agent outputs
- `_index` - Reserved for item index
- `_key` - Reserved for extracted key
- `fail_fast` (recommended): Stop immediately on first agent failure
- `continue_on_error`: Run all agents; proceed if at least one succeeds
- `all_or_nothing`: Run all agents; fail if any agent fails
Downstream agents can access parallel group outputs using Jinja2 templates:
agents:
- name: summarizer
prompt: |
Summarize the research findings:
Web research: {{ parallel_researchers.outputs.web_researcher.summary }}
Academic research: {{ parallel_researchers.outputs.academic_researcher.summary }}
News research: {{ parallel_researchers.outputs.news_researcher.summary }}

Structure:
- `{{ group_name.outputs.agent_name.field }}` - Access successful agent output
- `{{ group_name.errors.agent_name.message }}` - Access error details (if `continue_on_error` mode)
agents:
- name: aggregator
prompt: |
Process these results:
# Index-based access (when key_by not specified)
First result: {{ processors.outputs[0].result }}
Second result: {{ processors.outputs[1].result }}
# Key-based access (when key_by is specified)
KPI-123 result: {{ analyzers.outputs["KPI-123"].analysis }}
# Iterate over all outputs
{% for result in processors.outputs %}
- {{ result | json }}
{% endfor %}
# Access loop metadata
Total processed: {{ processors.outputs | length }}
# Check for errors
{% if processors.errors %}
Failed items: {{ processors.errors | length }}
{% endif %}

Structure:

- Without `key_by`: `{{ group_name.outputs[index].field }}` - Array access
- With `key_by`: `{{ group_name.outputs["key"].field }}` - Dict access
- `{{ group_name.errors }}` - Dict of failed items (if `continue_on_error` or `all_or_nothing`)
Routes define workflow control flow. Routes are evaluated in order, and the first matching route is taken.
routes:
- to: next_agent # Agent name or $end

routes:
- to: approver
when: "{{ quality_score >= 8 }}"
- to: reviser
when: "{{ quality_score < 8 }}"
- to: $end # Default fallback

Routes support Jinja2 templates and simpleeval expressions:
# Jinja2 syntax (recommended)
when: "{{ agent.output.status == 'success' }}"
when: "{{ agent.output.score > 5 and agent.output.valid }}"
# simpleeval syntax (legacy)
when: "status == 'success'"
when: "score > 5 and valid"$end- Terminate workflow successfully- Agent names must match an existing agent or parallel group name
Define expected inputs in the input section:
input:
question:
type: string
required: true
description: "The question to answer"
context:
type: string
required: false
default: "No additional context provided"

Access in agents: {{ workflow.input.question }}
Optional inputs without an explicit default resolve to type-appropriate zero values rather than None, so templates render cleanly:
| Input type | Zero value |
|---|---|
| string | `""` |
| number | `0` |
| boolean | `false` |
| array | `[]` |
| object | `{}` |
This means {{ workflow.input.optional_msg | default("fallback") }} correctly renders "fallback" when optional_msg is omitted, instead of the literal string "None".
In addition to workflow.input.*, every agent has access to:
| Variable | Description |
|---|---|
| `workflow.name` | Workflow name from the YAML |
| `workflow.description` | Workflow description from the YAML |
| `workflow.dir` | Absolute path to the directory containing the workflow YAML |
| `workflow.file` | Absolute path to the workflow YAML file |
These are available in all context modes (they're metadata, not inputs). workflow.dir is particularly useful for registry-hosted workflows that need to reference co-located scripts or assets without depending on the caller's working directory:
agents:
- name: detector
type: script
command: pwsh
args:
- "-File"
- "{{ workflow.dir }}/scripts/detect-state.ps1"

Define the final workflow output:
output:
answer: "{{ answerer.output.answer }}"
confidence: "{{ answerer.output.confidence }}"
sources: "{{ researcher.output.sources }}"

Define expected output schema for validation:
agents:
- name: analyzer
output:
score:
type: number
description: "Quality score 1-10"
summary:
type: string
description: "Brief summary"
recommendations:
type: array
description: "List of recommendations"

Configure safety limits to prevent runaway workflows:
workflow:
limits:
max_iterations: 50 # Maximum agent executions (1-500, default: 10)
timeout_seconds: 1800 # Maximum wall-clock time in seconds (optional)

- Each agent execution counts as 1 iteration
- Parallel agents count individually (3 parallel agents = 3 iterations)
- Loop-back patterns increment the counter on each iteration
- Workflow terminates when `timeout_seconds` is exceeded
- Includes all agent execution time and overhead
- `None` (default) means no timeout
Tools can be configured at workflow or agent level.
Available to all agents:
tools:
- web_search
- calculator

Override or extend workflow tools:
agents:
- name: researcher
tools:
- web_search
- arxiv_search

Note: Tool implementation depends on your provider. See provider documentation for available tools.
Tools are typically provided by MCP servers configured in the workflow.runtime.mcp_servers section. MCP tools are automatically made available to agents and can be filtered using the tools field above.
workflow:
runtime:
mcp_servers:
web-search:
command: npx
args: ["-y", "open-websearch@latest"]
tools: ["*"]
agents:
- name: researcher
tools:
- web-search__search # Use specific MCP tool (server__tool format)
prompt: "Research the topic"

For full MCP configuration details, see the MCP Tools guide.
The !file YAML tag lets you reference external files from any YAML field value. The file content is transparently inlined during loading, keeping workflow files concise and enabling reuse of prompts, schemas, and configuration across workflows.
Use the !file tag followed by a file path:
field_name: !file path/to/file

The tag can be used on any scalar YAML value — string fields, output schemas, tool lists, or any other field.
The content of the referenced file is handled based on its structure:
- YAML dict or list — If the file content parses as a YAML mapping or sequence, it is returned as structured data (dict or list). This is useful for output schemas, tool lists, or any structured configuration.
- Scalar or non-YAML — If the file contains a YAML scalar (e.g., a plain string), is not valid YAML, or is a non-YAML format like Markdown, the raw file content is returned as a string.
File paths are resolved relative to the directory containing the YAML file that uses the !file tag, not relative to the current working directory.
project/
├── workflows/
│ └── review.yaml # prompt: !file ../prompts/review.md
├── prompts/
│ └── review.md # ← resolved relative to workflows/
└── schemas/
└── output.yaml
When using load_string() programmatically:
- If `source_path` is provided, paths resolve relative to `source_path.parent`
- If `source_path` is not provided, paths resolve relative to the current working directory
Keep long prompts in separate Markdown files for easier editing:
# workflow.yaml
agents:
- name: reviewer
model: gpt-4
prompt: !file prompts/review-prompt.md
routes:
- to: $end

# prompts/review-prompt.md
You are a code review expert.
Please analyze the following code and provide:
- A summary of what the code does
- Any bugs or issues found
- Suggestions for improvement

Extract output schemas into reusable files:
# workflow.yaml
agents:
- name: analyzer
model: gpt-4
prompt: "Analyze the input data"
output: !file schemas/analysis-output.yaml
routes:
- to: $end

# schemas/analysis-output.yaml
summary:
type: string
description: A brief summary of the analysis
score:
type: number
description: A confidence score from 1 to 10

Share tool configurations across agents:
# workflow.yaml
agents:
- name: researcher
model: gpt-4
prompt: "Research the topic"
tools: !file tools/research-tools.yaml
routes:
- to: $end

# tools/research-tools.yaml
- web_search
- arxiv_search
- calculator

Included YAML files can themselves contain !file tags. Each nested reference resolves relative to its own file's directory:
# workflow.yaml
agents:
- name: agent1
model: gpt-4
prompt: "Hello"
output: !file schemas/nested.yaml
routes:
- to: $end

# schemas/nested.yaml
summary:
type: string
description: !file ../descriptions/summary-desc.md

# descriptions/summary-desc.md
A comprehensive summary of the analysis results.

Environment variable references (${VAR} or ${VAR:-default}) inside included files are resolved after inclusion, during the standard environment variable resolution pass. This means you can use env vars in external files just as you would inline:
# prompts/greeting.md
Hello ${USER_NAME:-User}, welcome to the system.

If a referenced file does not exist, a ConfigurationError is raised with the file path and a suggestion:
ConfigurationError: File not found: 'prompts/missing.md' (resolved to '/absolute/path/prompts/missing.md')
💡 Suggestion: Check the file path is correct relative to the workflow file directory.
If !file tags form a cycle (e.g., file A includes file B which includes file A), a ConfigurationError is raised:
ConfigurationError: Circular file reference detected: 'a.yaml'
File inclusion chain: /path/main.yaml → /path/a.yaml → /path/b.yaml → /path/a.yaml
💡 Suggestion: Remove the circular !file reference.
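Cycle detection works by tracking the chain of files entered so far; the check can be sketched as (an illustration of the rule, not the loader's code):

```python
import os

class ConfigurationError(Exception):
    """Raised for invalid workflow configuration."""

def enter_file(path: str, chain: tuple[str, ...] = ()) -> tuple[str, ...]:
    """Push `path` onto the inclusion chain, rejecting cycles."""
    resolved = os.path.abspath(path)
    if resolved in chain:
        raise ConfigurationError(f"Circular file reference detected: {path!r}")
    return chain + (resolved,)
```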
Only UTF-8 text files are supported. Non-UTF-8 files produce a ConfigurationError with encoding guidance.
- UTF-8 only — Only UTF-8 encoded text files are supported
- No glob patterns — Wildcards like `!file prompts/*.md` are not supported
- No URLs — Remote references like `!file https://...` are not supported
- No conditional includes — File references cannot be parameterized or conditional
- No caching — Each `!file` reference reads the file independently
Lifecycle hooks execute template expressions at key workflow events:
workflow:
hooks:
on_start: "{{ 'Starting workflow: ' + workflow.name }}"
on_complete: "{{ 'Workflow completed in ' + str(workflow.execution_time) + 's' }}"
on_error: "{{ 'Workflow failed: ' + workflow.error.message }}"

on_start:

- `workflow.name`, `workflow.description`, `workflow.dir`, `workflow.file`
- `workflow.input.*` (all input values)
on_complete:
- All agent outputs
- `workflow.execution_time` (total seconds)
- `workflow.iteration_count` (total iterations)
on_error:
- `workflow.error.message` (error message)
- `workflow.error.agent` (agent that failed)
- Partial agent outputs (agents that completed before failure)
workflow:
name: code-review
description: Multi-stage code review with parallel validation
entry_point: analyzer
limits:
max_iterations: 20
timeout_seconds: 600
context_mode: accumulate
input:
code:
type: string
required: true
language:
type: string
required: true
tools:
- static_analyzer
agents:
- name: analyzer
model: claude-sonnet-4.5
prompt: |
Analyze this {{ workflow.input.language }} code for issues:
{{ workflow.input.code }}
output:
issues:
type: array
routes:
- to: parallel_validators
parallel:
- name: parallel_validators
agents:
- security_check
- performance_check
- style_check
failure_mode: continue_on_error
routes:
- to: summarizer
agents:
- name: security_check
prompt: "Check for security vulnerabilities: {{ analyzer.output.issues }}"
output:
security_issues:
type: array
- name: performance_check
prompt: "Check for performance issues: {{ analyzer.output.issues }}"
output:
performance_issues:
type: array
- name: style_check
prompt: "Check for style violations: {{ analyzer.output.issues }}"
output:
style_issues:
type: array
- name: summarizer
prompt: |
Summarize findings:
Security: {{ parallel_validators.outputs.security_check.security_issues }}
Performance: {{ parallel_validators.outputs.performance_check.performance_issues }}
Style: {{ parallel_validators.outputs.style_check.style_issues }}
output:
summary:
type: string
routes:
- to: $end
output:
summary: "{{ summarizer.output.summary }}"
all_issues: "{{ analyzer.output.issues }}"

- Parallel Execution Guide - Detailed parallel execution patterns
- Examples - Complete workflow examples
- README - Getting started and CLI reference