
Competitive Multi-Agent Systems: Debate, Adversarial Review, and Red Teaming

Implement competitive multi-agent patterns where agents debate, critique, and red-team each other's outputs to improve accuracy, catch errors, and stress-test AI-generated content before it reaches users.

Beyond Cooperation: When Agents Should Disagree

Most multi-agent tutorials show cooperative agents — a researcher passes findings to a writer who passes to an editor, everyone building on each other's work. But cooperation has a blind spot. Agents inherit each other's mistakes. If the researcher includes an incorrect statistic, the writer amplifies it, and the editor polishes it into convincing prose. Nobody challenges the source.

Competitive multi-agent systems introduce deliberate friction. Instead of agents always building on each other's output, they challenge, critique, and try to break it. This adversarial dynamic catches errors that cooperative systems miss.

The Debate Pattern

In the debate pattern, two agents argue opposite sides of a question, and a judge agent evaluates their arguments to reach a conclusion. This is directly inspired by how human debate improves reasoning — by forcing each side to find weaknesses in the other's position.

from dataclasses import dataclass, field

from agents import Agent, RunContextWrapper, Runner, function_tool, handoff

@dataclass
class DebateContext:
    topic: str = ""
    pro_arguments: list[str] = field(default_factory=list)
    con_arguments: list[str] = field(default_factory=list)
    judge_verdict: str = ""
    current_round: int = 0

@function_tool
def submit_pro_argument(
    ctx: RunContextWrapper[DebateContext],
    argument: str,
) -> str:
    """Submit an argument in favor of the proposition."""
    ctx.context.pro_arguments.append(argument)
    return f"Pro argument {len(ctx.context.pro_arguments)} recorded"

@function_tool
def submit_con_argument(
    ctx: RunContextWrapper[DebateContext],
    argument: str,
) -> str:
    """Submit an argument against the proposition."""
    ctx.context.con_arguments.append(argument)
    return f"Con argument {len(ctx.context.con_arguments)} recorded"

@function_tool
def read_debate_state(
    ctx: RunContextWrapper[DebateContext],
) -> str:
    """Read all arguments submitted so far."""
    pros = "\n".join(f"  PRO: {a}" for a in ctx.context.pro_arguments) or "  None"
    cons = "\n".join(f"  CON: {a}" for a in ctx.context.con_arguments) or "  None"
    return f"Topic: {ctx.context.topic}\nRound: {ctx.context.current_round}\n\nFor:\n{pros}\n\nAgainst:\n{cons}"

@function_tool
def submit_verdict(
    ctx: RunContextWrapper[DebateContext],
    verdict: str,
) -> str:
    """Submit the judge's final verdict."""
    ctx.context.judge_verdict = verdict
    return f"Verdict recorded: {verdict[:100]}..."

pro_debater = Agent(
    name="Pro Debater",
    instructions="""You argue IN FAVOR of the topic. Read the current
    debate state, then submit a strong argument. Address any opposing
    arguments directly. Be specific, cite reasoning, and avoid
    generalizations.""",
    tools=[read_debate_state, submit_pro_argument],
)

con_debater = Agent(
    name="Con Debater",
    instructions="""You argue AGAINST the topic. Read the current
    debate state, then submit a strong counterargument. Directly
    challenge the pro side's weakest points. Be specific and rigorous.""",
    tools=[read_debate_state, submit_con_argument],
)

judge = Agent(
    name="Judge",
    instructions="""You evaluate the debate. Read all arguments from
    both sides. Assess argument quality, evidence, and logical
    soundness. Submit a verdict that includes:
    1. Which side made stronger arguments and why
    2. The strongest single argument from each side
    3. Your balanced conclusion on the topic""",
    tools=[read_debate_state, submit_verdict],
)

An orchestrator runs the debate by alternating between the pro and con debaters for several rounds, then handing off to the judge. The adversarial structure forces each debater to address the other's strongest points rather than simply presenting one-sided analysis.
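The orchestration loop itself can be sketched in plain Python. Here the agent turns are abstracted as callables; in a real run each would wrap a `Runner.run(...)` call for the corresponding agent, and the agent's tools would mutate the shared context. `DebateState` and `run_debate_rounds` are illustrative names for this sketch, not SDK APIs:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DebateState:
    """Mirrors the DebateContext above, for a standalone sketch."""
    topic: str = ""
    pro_arguments: list[str] = field(default_factory=list)
    con_arguments: list[str] = field(default_factory=list)
    current_round: int = 0

def run_debate_rounds(
    state: DebateState,
    pro_turn: Callable[[DebateState], str],
    con_turn: Callable[[DebateState], str],
    rounds: int = 3,
) -> DebateState:
    """Alternate pro and con turns for a fixed number of rounds.

    Each turn callable sees the full state (so it can rebut the
    other side's latest points) and returns the argument to record.
    """
    for _ in range(rounds):
        state.current_round += 1
        state.pro_arguments.append(pro_turn(state))
        state.con_arguments.append(con_turn(state))
    return state
```

Because each turn reads the full state before arguing, round two's pro argument can directly target round one's con argument, which is what makes the alternation adversarial rather than two parallel monologues.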

The Critique Agent Pattern

A simpler competitive pattern places a critic after a producer. The producer generates content, the critic identifies problems, and the producer revises:

producer = Agent(
    name="Content Producer",
    instructions="""Write high-quality content on the given topic.
    If you receive critique feedback in the conversation, revise your
    content to address every specific point raised.""",
)

critic = Agent(
    name="Content Critic",
    instructions="""You are a rigorous critic. Review the content just
    produced and identify:
    1. Factual claims that are unsupported or potentially wrong
    2. Logical gaps or non-sequiturs
    3. Missing perspectives or counterarguments
    4. Vague statements that should be more specific

    Be harsh but constructive. List every issue you find, with specific
    quotes from the content. Do NOT praise — only identify problems.""",
)

editorial_orchestrator = Agent(
    name="Editorial Orchestrator",
    instructions="""Manage the content production process:
    1. Hand off to Content Producer to create the initial draft
    2. Hand off to Content Critic to review
    3. If the critic found significant issues, hand back to Content
       Producer for revision
    4. After revision, deliver the final content

    Maximum 2 revision rounds.""",
    handoffs=[handoff(producer), handoff(critic)],
)

The key instruction for the critic is "Do NOT praise." Without this, the model's default helpfulness kicks in and it softens criticism, defeating the purpose of the adversarial pattern.
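The control flow the editorial orchestrator follows can be sketched as a plain revision loop. The `produce` and `critique` callables stand in for `Runner` calls to the two agents; `produce_with_critique` is an illustrative helper for this sketch, not part of the SDK:

```python
from typing import Callable

def produce_with_critique(
    produce: Callable[[str], str],          # prompt or feedback -> draft
    critique: Callable[[str], list[str]],   # draft -> list of issues
    topic: str,
    max_revisions: int = 2,
) -> tuple[str, list[str]]:
    """Producer/critic loop: revise until the critic finds no
    issues or the revision budget is exhausted.

    Returns the final draft and any unresolved issues.
    """
    draft = produce(topic)
    issues = critique(draft)
    for _ in range(max_revisions):
        if not issues:
            break
        # Feed the critic's issues back to the producer verbatim.
        feedback = f"{topic}\n\nAddress these issues:\n" + "\n".join(issues)
        draft = produce(feedback)
        issues = critique(draft)
    return draft, issues
```

Returning the unresolved issues alongside the draft lets the caller decide what to do when the revision budget runs out, rather than silently shipping content the critic still objects to.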

Red Teaming with Adversarial Agents

Red teaming uses an agent that actively tries to break or exploit another agent's output. This is invaluable for testing AI safety, prompt injection resistance, and content quality:


from agents import Agent, function_tool, RunContextWrapper
from dataclasses import dataclass, field

@dataclass
class RedTeamContext:
    original_output: str = ""
    attack_results: list[dict] = field(default_factory=list)
    vulnerabilities_found: int = 0

@function_tool
def record_attack_result(
    ctx: RunContextWrapper[RedTeamContext],
    attack_type: str,
    attack_input: str,
    result: str,
    vulnerability_found: bool,
) -> str:
    """Record the result of a red team attack."""
    ctx.context.attack_results.append({
        "type": attack_type,
        "input": attack_input,
        "result": result,
        "vulnerable": vulnerability_found,
    })
    if vulnerability_found:
        ctx.context.vulnerabilities_found += 1
    status = "VULNERABLE" if vulnerability_found else "SECURE"
    return f"Attack '{attack_type}': {status}"

red_team_agent = Agent(
    name="Red Team Agent",
    instructions="""You are a security red teamer. Your job is to find
    weaknesses in AI-generated content and agent behaviors. For each
    piece of content, attempt these attacks:

    1. FACTUAL MANIPULATION: Can the content be misquoted or taken out
       of context to support false claims?
    2. BIAS DETECTION: Does the content show unacknowledged bias toward
       a particular viewpoint?
    3. EDGE CASE FAILURE: What inputs or follow-up questions would make
       the content incorrect or harmful?
    4. HALLUCINATION CHECK: Are there specific claims that cannot be
       verified or are likely fabricated?

    Record each attack result using the tool.""",
    tools=[record_attack_result],
)

Run the red team agent against every significant output before it reaches users. The attack results log gives you a structured quality report.
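Turning the recorded results into that report is straightforward. A small roll-up helper over the `attack_results` list (`summarize_attacks` is illustrative, not an SDK function):

```python
def summarize_attacks(attack_results: list[dict]) -> dict:
    """Roll up red-team attack records into a quality report."""
    vulnerable = [r for r in attack_results if r["vulnerable"]]
    return {
        "total_attacks": len(attack_results),
        "vulnerabilities": len(vulnerable),
        # Deduplicated attack categories that found a weakness.
        "vulnerable_types": sorted({r["type"] for r in vulnerable}),
        # Gate: only ship content with zero vulnerabilities.
        "passed": not vulnerable,
    }
```

The `passed` flag makes the report usable as a release gate: block delivery whenever it is false, and route the `vulnerable_types` list back to the producer for revision.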

Building Consensus from Disagreement

After competitive agents have debated or critiqued, you need a mechanism to reach a final answer. Three consensus strategies work well:

1. Judge-based. A separate judge agent evaluates all positions and renders a verdict. This is the cleanest approach.

2. Majority vote. Run the same task through three independent agents and take the majority answer. This works well for classification tasks.

3. Iterative convergence. Agents debate in rounds until they agree. Set a maximum round count to prevent infinite loops.

convergence_orchestrator = Agent(
    name="Convergence Orchestrator",
    instructions="""Run a debate between Pro and Con debaters. After
    each round, check if both sides agree on the core conclusion.
    If they converge, deliver the consensus. If after 3 rounds they
    have not converged, hand off to the Judge for a final ruling.

    Maximum 3 debate rounds.""",
    handoffs=[
        handoff(pro_debater),
        handoff(con_debater),
        handoff(judge),
    ],
)

FAQ

Does the adversarial approach actually improve output quality?

Yes, measurably. Research on LLM debate suggests that adversarial review can catch 30-60% more factual errors than single-pass generation. The improvement is most dramatic for complex, multi-step reasoning tasks where a single agent might take plausible-sounding shortcuts.

Is the debate pattern too expensive for production use?

It costs 3-5 times more than single-agent generation because you make multiple LLM calls. Use it selectively — for high-stakes outputs (legal content, medical information, financial advice) where accuracy is worth the cost. For low-risk content, a single producer-critic pass is usually sufficient.

Can I use the same model for both debaters?

Yes, and it works surprisingly well. The same model, given different instructions ("argue for" vs. "argue against"), produces genuinely different arguments. However, using different models or different temperature settings for each debater increases diversity of thought and catches more issues.


#DebatePattern #RedTeaming #AdversarialAI #MultiAgentSystems #OpenAIAgentsSDK #AgenticAI #LearnAI #AIEngineering

CallSphere Team

Expert insights on AI voice agents and customer communication automation.
