The Orchestrator Pattern: A Manager Agent That Delegates to Specialists
Learn how to build an orchestrator agent that breaks complex tasks into subtasks, delegates to specialist agents, aggregates results, and delivers a unified response using the OpenAI Agents SDK.
What Is the Orchestrator Pattern?
The orchestrator pattern places a single "manager" agent at the top of the hierarchy. This orchestrator does not do the work itself. Instead, it analyzes the incoming request, decomposes it into subtasks, delegates each subtask to a specialist agent, collects the results, and synthesizes a final answer.
This is the most common multi-agent architecture because it maps directly to how human organizations work. A project manager does not write code, design UIs, and configure infrastructure personally. They understand the goal, identify what needs to be done, assign the right people, and combine everyone's output into a deliverable.
Why Not Just Chain Agents Sequentially?
A sequential pipeline — agent A passes to agent B passes to agent C — works when the steps are linear and predictable. But most real tasks are not linear. A user asks "Compare the pricing, features, and customer reviews of products X and Y." This requires three parallel research streams that must be synthesized afterward. A sequential chain would process them one at a time, and the last agent would lack visibility into the first agent's work.
The orchestrator pattern solves this because the orchestrator sees the full picture. It knows which subtasks are needed, which can run in parallel, and how to merge the results.
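The fan-out/fan-in shape can be sketched without any agent SDK at all. Below is a minimal, framework-free illustration; the specialist coroutines and their return strings are placeholders standing in for LLM-backed agents, not real SDK calls:

```python
import asyncio

# Stand-in specialists: in a real system each would be an LLM-backed agent.
async def research_pricing(product: str) -> str:
    await asyncio.sleep(0.1)  # simulate I/O-bound agent work
    return f"pricing report for {product}"

async def research_reviews(product: str) -> str:
    await asyncio.sleep(0.1)
    return f"review summary for {product}"

async def orchestrate(product: str) -> str:
    # Fan out: both research streams run concurrently, unlike a chain.
    pricing, reviews = await asyncio.gather(
        research_pricing(product),
        research_reviews(product),
    )
    # Fan in: the orchestrator sees both results and synthesizes.
    return f"Comparison for {product}: {pricing}; {reviews}"

print(asyncio.run(orchestrate("X")))
```

Because both subtasks run under `asyncio.gather`, total latency is roughly the slowest specialist rather than the sum of all specialists.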
Building a Basic Orchestrator
Here is a practical example — a research orchestrator that delegates to a web researcher and a data analyst:
```python
from agents import Agent, Runner, function_tool, handoff

@function_tool
def search_web(query: str) -> str:
    """Search the web for information on a topic."""
    return f"Web results for '{query}': [article1, article2, article3]"

@function_tool
def analyze_data(dataset: str, question: str) -> str:
    """Analyze a dataset and answer a question about it."""
    return f"Analysis of {dataset}: The trend shows 15% growth YoY"

researcher = Agent(
    name="Web Researcher",
    instructions="""You are a web researcher. Use the search tool to find
    relevant information. Return a structured summary of your findings
    with sources.""",
    tools=[search_web],
)

analyst = Agent(
    name="Data Analyst",
    instructions="""You are a data analyst. Use the analysis tool to
    examine datasets and extract insights. Return findings with
    supporting numbers.""",
    tools=[analyze_data],
)

orchestrator = Agent(
    name="Research Orchestrator",
    instructions="""You are a research manager. When given a research
    question:
    1. Identify whether it needs web research, data analysis, or both.
    2. Delegate to the appropriate specialist(s).
    3. After receiving specialist outputs, synthesize a unified report.
    Always delegate — never try to answer research questions yourself.""",
    handoffs=[handoff(researcher), handoff(analyst)],
)

result = Runner.run_sync(
    orchestrator,
    "What are the latest trends in renewable energy investment?",
)
print(result.final_output)
```
The orchestrator reads the question, decides it needs web research, and hands off to the Web Researcher. The researcher calls its tools, produces findings, and returns control. The orchestrator then synthesizes the final answer.
Task Routing Logic
The orchestrator's intelligence lies in its routing decisions. There are two approaches to routing:
LLM-based routing — The orchestrator uses the model's reasoning to decide which agent to call. This is flexible but adds latency for the routing decision.
Structured routing with tool calls — You give the orchestrator a routing tool that explicitly selects the target agent:
```python
from agents import Agent, Runner, function_tool, handoff
from pydantic import BaseModel

class TaskPlan(BaseModel):
    needs_research: bool
    needs_analysis: bool
    research_query: str = ""
    analysis_question: str = ""

@function_tool
def create_task_plan(
    needs_research: bool,
    needs_analysis: bool,
    research_query: str = "",
    analysis_question: str = "",
) -> str:
    """Create a structured plan for which specialists to engage."""
    parts = []
    if needs_research:
        parts.append(f"RESEARCH: {research_query}")
    if needs_analysis:
        parts.append(f"ANALYSIS: {analysis_question}")
    return " | ".join(parts) if parts else "No specialist needed"

orchestrator = Agent(
    name="Orchestrator",
    instructions="""First call create_task_plan to decide which
    specialists are needed. Then delegate to the appropriate agents
    based on the plan.""",
    tools=[create_task_plan],
    handoffs=[handoff(researcher), handoff(analyst)],
)
```
This gives you an auditable routing decision. You can log the task plan and understand exactly why the orchestrator chose each specialist.
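Because the plan is a plain string, auditing it is straightforward. As an illustrative sketch (the `parse_task_plan` helper below is not part of the SDK), you can parse the plan back into structured fields before writing it to your logs:

```python
def parse_task_plan(plan: str) -> dict:
    """Parse a 'RESEARCH: ... | ANALYSIS: ...' plan string for audit logs."""
    parsed = {"research": None, "analysis": None}
    if plan == "No specialist needed":
        return parsed
    for part in plan.split(" | "):
        label, _, detail = part.partition(": ")
        if label == "RESEARCH":
            parsed["research"] = detail
        elif label == "ANALYSIS":
            parsed["analysis"] = detail
    return parsed

plan = "RESEARCH: renewable energy trends | ANALYSIS: YoY investment growth"
print(parse_task_plan(plan))
```

Logging the parsed plan alongside the final output lets you answer "why did the orchestrator pick this specialist?" after the fact.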
Aggregating Results
After specialists complete their work, the orchestrator must combine their outputs. The SDK handles this naturally — when a specialist finishes, control returns to the orchestrator with the specialist's output in the conversation history. The orchestrator can then reason about the combined results and produce a final synthesis.
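A framework-free sketch of that aggregation step (the message dictionaries below are simplified stand-ins for the SDK's conversation items, not its actual data model):

```python
# Simplified conversation history after two specialist handoffs return.
history = [
    {"role": "user", "content": "Compare trends in renewable energy investment"},
    {"role": "assistant", "name": "Web Researcher",
     "content": "Solar funding rose sharply in 2024."},
    {"role": "assistant", "name": "Data Analyst",
     "content": "Dataset shows 15% growth YoY."},
]

def collect_specialist_outputs(history: list[dict]) -> dict[str, str]:
    """Gather each specialist's output so the orchestrator can synthesize."""
    return {
        msg["name"]: msg["content"]
        for msg in history
        if msg["role"] == "assistant" and "name" in msg
    }

outputs = collect_specialist_outputs(history)
synthesis_prompt = "Synthesize a report from:\n" + "\n".join(
    f"- {name}: {text}" for name, text in outputs.items()
)
print(synthesis_prompt)
```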
For more complex aggregation, you can give the orchestrator a formatting tool:
```python
@function_tool
def format_report(
    research_findings: str,
    analysis_findings: str,
    executive_summary: str,
) -> str:
    """Format specialist findings into a structured report."""
    return f"""## Executive Summary
{executive_summary}

## Research Findings
{research_findings}

## Data Analysis
{analysis_findings}"""
```
This ensures consistent output formatting regardless of how the specialists phrase their results.
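To see the guarantee concretely, here is the same formatting logic as a plain function (without the `@function_tool` decorator, so it can be called directly) together with a usage example; the findings strings are made up for illustration:

```python
def format_report(research_findings: str, analysis_findings: str,
                  executive_summary: str) -> str:
    """Plain version of the formatting tool, for illustration."""
    return (
        f"## Executive Summary\n{executive_summary}\n\n"
        f"## Research Findings\n{research_findings}\n\n"
        f"## Data Analysis\n{analysis_findings}"
    )

report = format_report(
    research_findings="Solar investment accelerated in 2024.",
    analysis_findings="Portfolio data shows 15% growth YoY.",
    executive_summary="Renewable investment is growing steadily.",
)
print(report)
```

Whatever phrasing the specialists use, the report always carries the same three section headers in the same order.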
Error Handling in Orchestration
Production orchestrators need to handle specialist failures gracefully. If the data analyst cannot process a dataset, the orchestrator should still deliver the web research results rather than failing entirely:
```python
orchestrator = Agent(
    name="Resilient Orchestrator",
    instructions="""You are a research manager. Delegate tasks to
    specialists. If a specialist reports that it cannot complete a task,
    acknowledge the gap in your final report rather than failing.
    Always deliver whatever results are available.
    Example: 'Research findings are included below. Data analysis could
    not be completed because the dataset was unavailable.'""",
    handoffs=[handoff(researcher), handoff(analyst)],
)
```
FAQ
How do I prevent the orchestrator from doing work instead of delegating?
Add explicit instructions like "Never answer research questions directly. Always delegate to a specialist." You can reinforce this with a guardrail that checks whether the orchestrator's output contains raw research rather than a synthesis of specialist results.
Can the orchestrator delegate to another orchestrator?
Yes. This creates a hierarchical architecture where a top-level orchestrator delegates to sub-orchestrators, each managing their own team of specialists. This is useful for very complex tasks like "build a complete market analysis" where each section (competitive landscape, financial projections, customer sentiment) deserves its own orchestration layer.
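The shape of that hierarchy can be sketched in plain Python; the section names and lambda "specialists" below are illustrative placeholders, not SDK constructs:

```python
# Each sub-orchestrator manages its own specialists and returns one section.
def market_sub_orchestrator() -> str:
    specialists = [lambda: "competitor pricing", lambda: "competitor features"]
    return "Competitive landscape: " + "; ".join(s() for s in specialists)

def finance_sub_orchestrator() -> str:
    specialists = [lambda: "revenue projection", lambda: "cost model"]
    return "Financial projections: " + "; ".join(s() for s in specialists)

def top_orchestrator() -> str:
    # The top level delegates whole report sections, not individual tools.
    sections = [market_sub_orchestrator(), finance_sub_orchestrator()]
    return "\n".join(sections)

print(top_orchestrator())
```

In SDK terms, each sub-orchestrator would simply be another Agent appearing in the parent's handoffs list, with its own specialists in turn.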
How many specialists should an orchestrator manage?
Keep it under seven. Just like human managers, orchestrator agents become less effective as the number of direct reports grows. The model must reason about which of its many specialists to invoke, and tool/handoff selection accuracy degrades with volume. If you need more than seven specialists, introduce sub-orchestrators.
CallSphere Team