
Building a Research Crew: Multi-Agent Team for Market Analysis

Build a complete CrewAI multi-agent team with researcher, analyst, and writer agents that collaborate through a task pipeline to produce a comprehensive market analysis report.

From Theory to a Working Product

The previous posts in this series covered CrewAI's components individually — agents, tasks, tools, memory, and process types. Now it is time to combine everything into a complete, working application: a multi-agent research crew that performs market analysis.

This crew takes a market topic as input and produces a structured report with research findings, competitive analysis, and strategic recommendations. It demonstrates agent specialization, tool usage, context chaining, and output quality techniques.

Architecture Overview

The crew consists of three specialized agents organized in a sequential pipeline:

  1. Market Researcher — Gathers raw data from web searches and sources
  2. Competitive Analyst — Analyzes the data, identifies patterns, and scores competitors
  3. Report Writer — Synthesizes everything into a polished, executive-ready report

Each agent has distinct tools, goals, and backstories that make it genuinely specialized rather than a generic LLM wrapper.

Setting Up the Environment

pip install crewai crewai-tools
export OPENAI_API_KEY="sk-your-key"
export SERPER_API_KEY="your-serper-key"

Defining the Agents

from crewai import Agent
from crewai_tools import SerperDevTool, ScrapeWebsiteTool

search_tool = SerperDevTool()
scrape_tool = ScrapeWebsiteTool()

researcher = Agent(
    role="Senior Market Researcher",
    goal="""Gather comprehensive, up-to-date market data including
    market size, growth rates, key players, and emerging trends.
    Always cite sources and distinguish facts from estimates.""",
    backstory="""You are a senior researcher at Gartner with 12 years
    of experience covering technology markets. You are meticulous about
    data accuracy and always cross-reference multiple sources. You know
    how to find information in earnings reports, industry publications,
    and analyst briefings.""",
    tools=[search_tool, scrape_tool],
    verbose=True,
    max_iter=15,
)

analyst = Agent(
    role="Competitive Intelligence Analyst",
    goal="""Analyze market data to identify competitive dynamics,
    strengths and weaknesses of key players, market gaps, and
    strategic opportunities. Produce quantitative assessments
    wherever possible.""",
    backstory="""You are a former McKinsey associate who transitioned
    to competitive intelligence. You think in frameworks — Porter's
    Five Forces, SWOT, value chain analysis. You are skeptical of
    surface-level analysis and always dig deeper into the 'why'
    behind market movements.""",
    verbose=True,
)

writer = Agent(
    role="Executive Report Writer",
    goal="""Transform research and analysis into a polished,
    executive-ready report that is clear, actionable, and
    well-structured. Use data to support every claim.""",
    backstory="""You are a communications director who has written
    board-level reports for Fortune 500 companies. You know that
    executives have limited time, so you lead with insights, support
    with data, and end with clear recommendations.""",
    verbose=True,
)

Notice the specificity in each agent's definition. The researcher is methodical and source-focused. The analyst thinks in frameworks. The writer is executive-oriented. These distinct personas produce genuinely different outputs.

Defining the Task Pipeline

from crewai import Task

research_task = Task(
    description="""Research the {market} market comprehensively. Cover:
    1. Current market size (2025-2026 estimates)
    2. Projected growth rate (CAGR) through 2030
    3. Top 5 companies by market share
    4. 3 emerging trends reshaping the market
    5. Key risks or headwinds facing the market

    Use web search to find recent data. Cross-reference at least
    2 sources for market size figures.""",
    expected_output="""A structured research document with sections for
    market size, growth projections, competitive landscape (table format),
    trends (numbered list with explanations), and risks. Include source
    URLs where available.""",
    agent=researcher,
)

analysis_task = Task(
    description="""Using the research data, perform a competitive analysis:
    1. Rank the top 5 players by competitive strength (1-10 scale)
    2. Identify each player's primary competitive advantage
    3. Identify 2 underserved market segments
    4. Assess barriers to entry for new competitors
    5. Provide a SWOT analysis for a hypothetical new entrant""",
    expected_output="""A competitive analysis report containing:
    - Competitive ranking table (company, score, rationale)
    - Market gap analysis (2 gaps with size estimates)
    - Barriers to entry assessment (high/medium/low with explanation)
    - New entrant SWOT analysis in quadrant format""",
    agent=analyst,
    context=[research_task],
)

report_task = Task(
    description="""Write an executive market analysis report combining
    the research and competitive analysis. The report should:
    1. Open with a 3-sentence executive summary
    2. Present key market data with supporting figures
    3. Include the competitive landscape analysis
    4. Identify the top 3 strategic opportunities
    5. Close with actionable recommendations

    Write for a C-suite audience. Use data from the research
    and analysis — do not make up statistics.""",
    expected_output="""An 800-1000 word executive report with clear
    sections: Executive Summary, Market Overview, Competitive Landscape,
    Strategic Opportunities, and Recommendations. Professional tone,
    data-driven, with specific numbers and actionable next steps.""",
    agent=writer,
    context=[research_task, analysis_task],
)

The explicit context parameter on the report task ensures the writer receives both the raw research and the competitive analysis, not just the immediately preceding task output.

Assembling and Running the Crew

from crewai import Crew, Process

market_research_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    process=Process.sequential,
    memory=True,
    verbose=True,
)

result = market_research_crew.kickoff(
    inputs={"market": "AI-powered customer service automation"}
)

print("=" * 60)
print("FINAL REPORT")
print("=" * 60)
print(result.raw)

The {market} placeholder in the research task description is replaced at runtime. This makes the crew reusable for any market topic.
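Under the hood, this interpolation behaves much like Python's str.format applied to the keys in the inputs dict. A minimal sketch of the idea in plain Python, with no CrewAI required (fill_description is a hypothetical helper, not a CrewAI function):

```python
# Sketch of {placeholder}-style interpolation as used in task descriptions.
# Plain str.format over the kickoff inputs; no CrewAI needed to illustrate it.
description_template = "Research the {market} market comprehensively."

def fill_description(template: str, inputs: dict) -> str:
    """Substitute {key} placeholders with values from the inputs dict."""
    return template.format(**inputs)

filled = fill_description(
    description_template,
    {"market": "AI-powered customer service automation"},
)
print(filled)
# → Research the AI-powered customer service automation market comprehensively.
```

Swapping the inputs dict is all it takes to point the same crew at a different topic.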


Improving Output Quality

Three techniques significantly improve the quality of crew output:

Technique 1: Guardrails in expected_output

analysis_task = Task(
    description="Analyze the competitive landscape.",
    expected_output="""Provide scores on a 1-10 scale. A score of 10 means
    market dominance with no significant vulnerabilities. A score below 5
    means the company faces existential threats. Justify every score with
    at least one specific data point from the research.""",
    agent=analyst,
)

Technique 2: Output validation with Pydantic

from pydantic import BaseModel
from typing import List

class MarketReport(BaseModel):
    executive_summary: str
    market_size_usd: str
    growth_rate: str
    top_players: List[str]
    recommendations: List[str]

report_task = Task(
    description="Write the market analysis report.",
    expected_output="A structured market report.",
    agent=writer,
    output_pydantic=MarketReport,
)

Technique 3: Enable memory for iterative improvement

Running the crew multiple times with memory enabled lets agents build on past research, producing progressively better reports.

FAQ

How long does a full crew execution take?

A three-agent sequential crew with web search typically takes 2 to 5 minutes, depending on the number of search queries and the complexity of analysis. The researcher usually takes the longest because of tool calls. Budget 10 to 15 LLM calls total.

How do I save the output to a file?

After kickoff(), write the result to disk. You can also use the output_file parameter on the final task: Task(..., output_file="report.md"). CrewAI writes the task output directly to the specified file path.
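A minimal sketch of the manual approach; save_report is a hypothetical helper, and the sample string stands in for result.raw from your own run:

```python
from pathlib import Path

def save_report(text: str, path: str) -> Path:
    """Write the crew's final output to a file and return its path."""
    report_path = Path(path)
    report_path.write_text(text, encoding="utf-8")
    return report_path

# After kickoff() you would pass result.raw; a stand-in string is used here.
saved = save_report("# Market Analysis\n\nSample body...", "report.md")
print(f"Wrote {saved.stat().st_size} bytes to {saved}")
```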

Can I add a review or editing step?

Yes. Add a fourth agent with a "Quality Reviewer" role and a task that takes the report as context and returns feedback or a revised version. This adds cost but catches errors, inconsistencies, and quality issues that a single pass might miss.


#CrewAI #MarketAnalysis #MultiAgent #Project #Python #AgenticAI #LearnAI #AIEngineering

CallSphere Team
