CrewAI Multi-Agent Tutorial: Role-Based Agent Teams for Complex Tasks
Hands-on CrewAI tutorial covering agent definitions with roles, goals, and backstories, task creation, sequential and hierarchical processes, and delegation patterns.
What CrewAI Brings to Multi-Agent Systems
Most agent frameworks focus on a single agent doing multiple things. CrewAI takes a different approach: it lets you define a team of specialized agents, each with a distinct role, goal, and backstory, working together on tasks. This mirrors how human teams work — a researcher gathers information, an analyst interprets it, and a writer produces the deliverable.
The role-based architecture makes it easy to build complex workflows without writing complex orchestration code. You define who your agents are, what they should do, and how they should collaborate. CrewAI handles the communication, task delegation, and output passing between agents.
Defining Agents with Roles
Every CrewAI agent has three core attributes: role (their job title), goal (what they are trying to achieve), and backstory (context that shapes their behavior). The backstory is surprisingly important — it gives the LLM persona-specific context that improves output quality.
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool, ScrapeWebsiteTool
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o", temperature=0.1)
# Agent 1: Market Researcher
researcher = Agent(
    role="Senior Market Research Analyst",
    goal=(
        "Discover and analyze the latest market trends, "
        "competitive landscape, and emerging opportunities "
        "in the target industry"
    ),
    backstory="""You are a seasoned market research analyst with
    15 years of experience at McKinsey and Bain. You specialize
    in technology markets and have a reputation for finding
    non-obvious insights that drive strategic decisions. You
    always back your findings with data and credible sources.""",
    tools=[SerperDevTool(), ScrapeWebsiteTool()],
    llm=llm,
    verbose=True,
    allow_delegation=True,
)
# Agent 2: Data Analyst
analyst = Agent(
    role="Quantitative Data Analyst",
    goal=(
        "Transform raw research data into actionable insights "
        "with clear metrics, trends, and projections"
    ),
    backstory="""You are a data analyst with deep expertise in
    statistical analysis and financial modeling. You spent 8 years
    at Goldman Sachs before moving to tech. You never present a
    number without context — every metric comes with a trend line,
    comparison, and confidence interval.""",
    llm=llm,
    verbose=True,
    allow_delegation=False,
)
# Agent 3: Report Writer
writer = Agent(
    role="Executive Report Writer",
    goal=(
        "Produce polished, executive-ready reports that "
        "communicate complex findings clearly and persuasively"
    ),
    backstory="""You are a communications specialist who has
    written reports for Fortune 500 C-suites for a decade. Your
    writing is crisp, data-driven, and action-oriented. You
    structure every report with an executive summary, key
    findings, detailed analysis, and specific recommendations.""",
    llm=llm,
    verbose=True,
    allow_delegation=False,
)
Creating Tasks
Tasks define what each agent should do. Each task has a description, an expected output format, and is assigned to a specific agent. Tasks can depend on each other — the output of one task becomes the context for the next.
# Task 1: Research
research_task = Task(
    description="""Conduct comprehensive market research on the
    AI agent framework market in 2026. Investigate:
    1. Market size and growth projections
    2. Key players and their market share
    3. Emerging trends and technologies
    4. Customer adoption patterns
    5. Investment and funding landscape
    Focus on factual, sourced data. Include specific numbers,
    company names, and dates.""",
    expected_output="""A detailed research brief with:
    - Market size figures with sources
    - Competitive landscape with at least 8 companies
    - 5 key trends with supporting evidence
    - Customer adoption statistics""",
    agent=researcher,
)
# Task 2: Analysis (depends on research)
analysis_task = Task(
    description="""Using the market research provided, perform
    quantitative analysis:
    1. Calculate market growth rates (CAGR)
    2. Segment the market by use case and geography
    3. Build a competitive positioning matrix
    4. Identify the top 3 investment opportunities
    5. Project market size for 2027-2030
    Use specific numbers and show your methodology.""",
    expected_output="""An analytical report with:
    - Growth rate calculations
    - Market segmentation breakdown
    - Competitive positioning analysis
    - Investment opportunity scoring
    - Revenue projections with assumptions""",
    agent=analyst,
    context=[research_task],  # Receives output from research
)
# Task 3: Report Writing (depends on analysis)
report_task = Task(
    description="""Create a polished executive report based on
    the research and analysis provided. The report should be
    structured for a board of directors audience.
    Include:
    1. Executive summary (1 paragraph)
    2. Market overview with key metrics
    3. Competitive analysis with visual-ready data
    4. Strategic recommendations (3-5 specific actions)
    5. Risk factors and mitigation strategies""",
    expected_output="""A complete executive report in markdown
    format, ready for presentation. 2000-3000 words with
    clear section headers and bullet points for key data.""",
    agent=writer,
    context=[research_task, analysis_task],
    output_file="market_report.md",
)
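Before moving on, it helps to see what context=[...] actually does. The toy sketch below (plain Python, not CrewAI internals) mimics how each task's output is threaded into the prompt of any later task that lists it as context:

```python
# Toy model of sequential context passing (illustrative only —
# in CrewAI each "run" is an LLM call, not a string template).
def run_task(name: str, context: list[str]) -> str:
    # A real agent would receive the task description plus the joined
    # context in its prompt; here we just record the lineage so the
    # data flow is visible.
    deps = ",".join(context)
    return f"{name}({deps})"

research = run_task("research", [])
analysis = run_task("analysis", [research])
report = run_task("report", [research, analysis])
print(report)  # report(research(),analysis(research()))
```

Note that context controls what an agent sees, while output_file only controls where the final text is written; the two are independent.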
Process Types: Sequential vs Hierarchical
CrewAI supports two execution processes. Sequential runs tasks in order — task 1 completes, then task 2 starts with task 1's output, and so on. Hierarchical introduces a manager agent that delegates tasks dynamically and can re-assign work based on results.
# Sequential process (default)
sequential_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    process=Process.sequential,
    verbose=True,
)
result = sequential_crew.kickoff()
print(result.raw)
# Hierarchical process (manager delegates)
hierarchical_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model="gpt-4o", temperature=0),
    verbose=True,
)
result = hierarchical_crew.kickoff()
In hierarchical mode, CrewAI creates a manager agent that reads all task descriptions and decides which agent should handle each task. The manager can re-delegate if an agent's output does not meet the expected quality. This is powerful for complex workflows where the optimal execution order is not obvious.
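Conceptually, the manager is doing skill-based routing plus a quality gate. A framework-free sketch of the routing step (the agent names and skill tags here are hypothetical, and CrewAI's real manager is an LLM reading roles and task descriptions, not a hand-written table):

```python
# Toy routing table standing in for the manager's delegation decision.
AGENT_SKILLS = {
    "researcher": {"research"},
    "analyst": {"analysis"},
    "writer": {"writing"},
}

def pick_agent(required_skill: str) -> str:
    # Route each task to the first agent whose skills cover it.
    for agent, skills in AGENT_SKILLS.items():
        if required_skill in skills:
            return agent
    raise ValueError(f"no agent can handle {required_skill!r}")

tasks = [("gather data", "research"), ("compute CAGR", "analysis"),
         ("draft report", "writing")]
assignments = {task: pick_agent(skill) for task, skill in tasks}
print(assignments)
# {'gather data': 'researcher', 'compute CAGR': 'analyst', 'draft report': 'writer'}
```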
Custom Tools for CrewAI Agents
Real agents need domain-specific tools. A CrewAI tool is a small class with a name, a description, an input schema, and a _run method that does the actual work.
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
import httpx
class StockPriceInput(BaseModel):
    ticker: str = Field(description="Stock ticker symbol")

class StockPriceTool(BaseTool):
    name: str = "stock_price_lookup"
    description: str = "Get the current stock price for a given ticker symbol"
    args_schema: type[BaseModel] = StockPriceInput

    def _run(self, ticker: str) -> str:
        response = httpx.get(
            f"https://api.example.com/stock/{ticker}",
            timeout=10.0,
        )
        response.raise_for_status()  # Surface HTTP errors instead of parsing bad JSON
        data = response.json()
        return f"{ticker}: ${data['price']:.2f} ({data['change']:+.2f}%)"
class DatabaseQueryInput(BaseModel):
    query: str = Field(description="SQL query to execute")

class DatabaseQueryTool(BaseTool):
    name: str = "query_database"
    description: str = "Execute a read-only SQL query against the company database"
    args_schema: type[BaseModel] = DatabaseQueryInput

    def _run(self, query: str) -> str:
        if not query.strip().upper().startswith("SELECT"):
            return "Error: Only SELECT queries are allowed"
        # Execute the query against your database
        import sqlite3
        conn = sqlite3.connect("company.db")
        try:
            cursor = conn.execute(query)
            rows = cursor.fetchall()
            columns = [desc[0] for desc in cursor.description]
        finally:
            conn.close()  # Always release the connection, even on bad SQL
        return str([dict(zip(columns, row)) for row in rows])
# Assign tools to agents
financial_analyst = Agent(
    role="Financial Analyst",
    goal="Analyze financial data and market conditions",
    backstory="Expert financial analyst with CFA certification",
    tools=[StockPriceTool(), DatabaseQueryTool()],
    llm=llm,
)
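The SELECT-only guard in DatabaseQueryTool is easy to verify in isolation. Here it is extracted as a plain function and exercised against an in-memory SQLite database (the users table is made up for the demo):

```python
import sqlite3

def run_readonly_query(conn: sqlite3.Connection, query: str) -> str:
    # Same guard as DatabaseQueryTool._run, minus the CrewAI wrapper.
    if not query.strip().upper().startswith("SELECT"):
        return "Error: Only SELECT queries are allowed"
    cursor = conn.execute(query)
    columns = [desc[0] for desc in cursor.description]
    return str([dict(zip(columns, row)) for row in cursor.fetchall()])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")
print(run_readonly_query(conn, "SELECT * FROM users"))
# [{'id': 1, 'name': 'Ada'}]
print(run_readonly_query(conn, "DROP TABLE users"))
# Error: Only SELECT queries are allowed
```

Keep in mind the prefix check is a heuristic, not a sandbox; for production, prefer a read-only database connection or a restricted database role.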
Delegation Patterns
When allow_delegation is True, an agent can ask another agent for help. This enables organic collaboration — the researcher might ask the analyst to verify a number, or the writer might ask the researcher for additional context.
# Enable selective delegation
researcher_with_delegation = Agent(
    role="Lead Researcher",
    goal="Produce comprehensive, verified research",
    backstory="Research lead who delegates verification tasks",
    tools=[SerperDevTool()],
    llm=llm,
    allow_delegation=True,  # Can delegate to other agents
)
fact_checker = Agent(
    role="Fact Checker",
    goal="Verify claims and data accuracy",
    backstory="Meticulous fact checker who cross-references sources",
    tools=[SerperDevTool(), ScrapeWebsiteTool()],
    llm=llm,
    allow_delegation=False,  # Terminal agent, no further delegation
)
Memory and Context Management
CrewAI supports three types of memory that improve agent performance across tasks and conversations.
crew_with_memory = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    process=Process.sequential,
    memory=True,  # Enable all memory types
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    verbose=True,
)
Short-term memory holds the current task execution context. Long-term memory persists across crew executions, allowing agents to learn from past runs. Entity memory tracks key entities (people, companies, products) mentioned during execution and maintains consistent references.
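As a rough mental model of entity memory (illustrative only — CrewAI actually stores entities as embeddings for semantic retrieval, not a plain dict):

```python
# Toy entity store: one canonical record per entity, facts accumulate
# across tasks so later agents reference the same entity consistently.
entity_memory: dict[str, dict] = {}

def remember_entity(name: str, kind: str, fact: str) -> None:
    entry = entity_memory.setdefault(name, {"kind": kind, "facts": []})
    entry["facts"].append(fact)

remember_entity("CrewAI", "framework", "mentioned in the research task")
remember_entity("CrewAI", "framework", "scored in the analysis task")
print(entity_memory["CrewAI"])
```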
Error Handling and Retry Logic
Production CrewAI deployments need robust error handling. Cap request rates and per-agent iterations, and set up callbacks to monitor execution.
from crewai import Crew
crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    process=Process.sequential,
    max_rpm=30,  # Rate limit to avoid API throttling
    max_iter=15,  # Max iterations per agent
    verbose=True,
    step_callback=lambda step: print(f"Step: {step}"),
    task_callback=lambda task: print(f"Task completed: {task.description[:50]}"),
)
try:
    result = crew.kickoff()
    print(f"Final output:\n{result.raw}")
    print(f"Token usage: {result.token_usage}")
except Exception as e:
    print(f"Crew execution failed: {e}")
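CrewAI does not retry a failed kickoff() for you. A small wrapper with exponential backoff (a hypothetical helper, not part of the library) covers transient failures such as rate limits:

```python
import time

def kickoff_with_retries(kickoff, max_attempts: int = 3, base_delay: float = 1.0):
    """Call `kickoff` (e.g. crew.kickoff), retrying transient failures
    with exponential backoff: base_delay, 2*base_delay, 4*base_delay, ..."""
    for attempt in range(1, max_attempts + 1):
        try:
            return kickoff()
        except Exception:
            if attempt == max_attempts:
                raise  # Out of attempts: surface the original error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Demo with a callable that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "report ready"

print(kickoff_with_retries(flaky, max_attempts=3, base_delay=0))  # report ready
```

In practice you would catch only the exception types you consider transient (timeouts, HTTP 429s) rather than bare Exception.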
FAQ
How does CrewAI compare to building custom multi-agent systems from scratch?
CrewAI dramatically reduces boilerplate. Building multi-agent communication, task delegation, output passing, and memory from scratch typically requires 2000-3000 lines of orchestration code. CrewAI handles all of this in configuration. The tradeoff is flexibility: CrewAI's abstractions make it harder to implement unusual communication patterns or custom execution strategies. For standard team-based workflows (research, analysis, writing, review), CrewAI saves weeks of development time. For highly custom agent topologies, you may outgrow it.
What is the optimal number of agents in a CrewAI team?
Keep it between 2 and 5 agents for most use cases. Each agent adds latency (one full LLM call per task) and cost. More importantly, more agents means more potential for miscommunication and context loss between handoffs. The sweet spot is 3 agents: one for data gathering, one for analysis, and one for output generation. If you find yourself defining more than 5 agents, consider whether some roles can be merged or whether the workflow should be split into multiple sequential crews.
Can CrewAI agents run concurrently?
In sequential mode, agents run one at a time. In hierarchical mode, the manager can dispatch independent tasks concurrently. CrewAI also supports async execution via kickoff_async() for running multiple crews in parallel. However, individual tasks within a sequential crew always run in order because each task depends on the previous task's output.
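The parallel-crews case is standard asyncio fan-out. The sketch below simulates two crews with plain coroutines; in real code each run_crew body would be an await of the crew's kickoff_async() (crew names here are made up):

```python
import asyncio

# Simulated crews: the sleep stands in for an LLM-backed crew run.
async def run_crew(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)
    return f"{name} done"

async def main() -> list[str]:
    # gather() runs both "crews" concurrently and preserves order.
    return await asyncio.gather(
        run_crew("us_market", 0.01),
        run_crew("eu_market", 0.01),
    )

print(asyncio.run(main()))  # ['us_market done', 'eu_market done']
```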
#CrewAI #MultiAgent #AgentTeams #RoleBasedAI #Python #AIFramework #AgentOrchestration #Tutorial
Written by
CallSphere Team
Expert insights on AI voice agents and customer communication automation.