
Shared State in Multi-Agent Systems: Coordinating Data Between Agents

Master shared state management in multi-agent systems using the OpenAI Agents SDK's RunContext, including shared context objects, state mutation patterns, race conditions, and consistency strategies.

The State Problem in Multi-Agent Systems

When a single agent handles a conversation, state management is straightforward — everything lives in the conversation history. But when multiple agents collaborate, they often need to share data that does not belong in the conversation. A customer ID looked up by the triage agent, a shopping cart being built by the product agent, authentication status verified by the auth agent — this operational state must flow between agents without being lost during handoffs.

The conversation history carries the dialogue, but structured data like user profiles, accumulated results, and workflow progress needs a different mechanism. This is where shared state comes in.

RunContext: The SDK's Shared State Mechanism

The OpenAI Agents SDK provides RunContext — a typed context object that is available to all agents and tools within a single run. You define a context class, pass an instance to the Runner, and every tool function can access and modify it:

from dataclasses import dataclass, field
from agents import Agent, Runner, RunContextWrapper, function_tool

@dataclass
class CustomerContext:
    customer_id: str = ""
    customer_name: str = ""
    subscription_tier: str = ""
    interaction_notes: list[str] = field(default_factory=list)

@function_tool
def lookup_customer(
    ctx: RunContextWrapper[CustomerContext],
    email: str,
) -> str:
    """Look up a customer by email and store their info in context."""
    # Simulate database lookup
    ctx.context.customer_id = "cust_12345"
    ctx.context.customer_name = "Alice Johnson"
    ctx.context.subscription_tier = "enterprise"
    return f"Found customer: {ctx.context.customer_name} ({ctx.context.subscription_tier})"

@function_tool
def add_interaction_note(
    ctx: RunContextWrapper[CustomerContext],
    note: str,
) -> str:
    """Add a note about the current interaction."""
    ctx.context.interaction_notes.append(note)
    return f"Note added. Total notes: {len(ctx.context.interaction_notes)}"

@function_tool
def get_customer_summary(
    ctx: RunContextWrapper[CustomerContext],
) -> str:
    """Return a summary of the current customer context."""
    c = ctx.context
    notes = "; ".join(c.interaction_notes) if c.interaction_notes else "None"
    return f"Customer: {c.customer_name} | Tier: {c.subscription_tier} | Notes: {notes}"

Now multiple agents can share this context:

auth_agent = Agent(
    name="Auth Agent",
    instructions="Look up the customer by email before proceeding.",
    tools=[lookup_customer],
)

support_agent = Agent(
    name="Support Agent",
    instructions="""Help the customer with their issue. Use
    get_customer_summary to understand who you are helping.
    Add interaction notes as you work.""",
    tools=[get_customer_summary, add_interaction_note],
)

# Wire the handoff so the auth agent can pass the conversation to support
auth_agent.handoffs = [support_agent]

When the auth agent calls lookup_customer, it populates the shared context. When the support agent later calls get_customer_summary, it reads the same context object and sees the data the auth agent stored.

Running with Context

Pass the context instance when starting the run:

from agents import Runner

context = CustomerContext()

result = Runner.run_sync(
    auth_agent,
    "My email is alice@example.com and my login is broken",
    context=context,
)

# After the run, context has been populated
print(context.customer_id)       # "cust_12345"
print(context.interaction_notes)  # ["...notes from the support agent..."]

The context is mutable and persists throughout the entire run, across all agent handoffs. This means data set by the first agent is available to the fifth agent without any explicit passing.

Designing Your Context Object

A well-designed context object serves as the "shared memory" for the agent team. Here are principles for structuring it:

Group by domain, not by agent. Do not create auth_agent_data and support_agent_data fields. Instead, model the domain: customer, order, interaction. Any agent that needs customer data reads from the same customer field.

Use typed fields, not dictionaries. A dataclass with explicit fields is self-documenting and catches errors at development time. Avoid metadata: dict catch-all fields.
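Both principles can be combined in one sketch. The `CustomerInfo`, `OrderInfo`, and `SupportContext` names below are illustrative, not SDK types -- the point is nested dataclasses grouped by domain:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerInfo:
    customer_id: str = ""
    name: str = ""
    tier: str = ""

@dataclass
class OrderInfo:
    items: list[str] = field(default_factory=list)
    total_cents: int = 0

@dataclass
class SupportContext:
    # Grouped by domain: any agent that needs customer data reads
    # the same `customer` field -- there are no per-agent silos.
    customer: CustomerInfo = field(default_factory=CustomerInfo)
    order: OrderInfo = field(default_factory=OrderInfo)
    interaction_notes: list[str] = field(default_factory=list)

ctx = SupportContext()
ctx.customer.name = "Alice Johnson"
ctx.order.items.append("sku_789")
```

With typed fields, a type checker flags a misspelling like ctx.customer.tir at development time; a metadata: dict catch-all would let it pass silently.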

Track workflow state explicitly. If your multi-agent workflow has phases, track the current phase in the context:

@dataclass
class WorkflowContext:
    phase: str = "intake"  # intake -> research -> resolution -> closure
    customer_id: str = ""
    issue_category: str = ""
    research_findings: list[str] = field(default_factory=list)
    resolution_applied: str = ""
    satisfaction_score: int = 0

Agents can check the phase before acting. The research agent verifies that phase == "research" before proceeding. This prevents agents from acting out of order.
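One way to enforce that check is a small guard that tools call before mutating state. The `require_phase` and `record_finding` helpers here are illustrative, not part of the SDK:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowContext:
    phase: str = "intake"  # intake -> research -> resolution -> closure
    research_findings: list[str] = field(default_factory=list)

def require_phase(ctx: WorkflowContext, expected: str) -> bool:
    """Return True if the workflow is in the expected phase."""
    return ctx.phase == expected

def record_finding(ctx: WorkflowContext, finding: str) -> str:
    # Refuse to act out of order instead of silently corrupting state.
    if not require_phase(ctx, "research"):
        return f"Cannot record findings during the '{ctx.phase}' phase."
    ctx.research_findings.append(finding)
    return f"Recorded. {len(ctx.research_findings)} finding(s) so far."

wf = WorkflowContext()
refused = record_finding(wf, "x")  # still in "intake", so refused
wf.phase = "research"
accepted = record_finding(wf, "Login outage started at 09:00")
```

Returning a readable refusal message (rather than raising) lets the model see why the tool declined and recover by advancing the workflow first.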

Handling Concurrent Access

In a synchronous single-run scenario, race conditions are not a concern because only one agent is active at a time. But if you build a system where multiple agents process different parts of a request concurrently (using asyncio or parallel tool calls), concurrent writes to the shared context can cause problems.

The safest pattern is to give each concurrent agent its own section of the context:

@dataclass
class ParallelResearchContext:
    # Each researcher writes to its own field
    web_findings: str = ""
    database_findings: str = ""
    api_findings: str = ""

    # Only the orchestrator writes to the final report
    final_report: str = ""

This eliminates write conflicts because no two agents write to the same field. The orchestrator reads all fields after the parallel phase completes and writes the final report.
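The fan-in step might look like the following sketch; `compile_report` is an illustrative orchestrator-side function, not an SDK API:

```python
from dataclasses import dataclass

@dataclass
class ParallelResearchContext:
    web_findings: str = ""
    database_findings: str = ""
    api_findings: str = ""
    final_report: str = ""

def compile_report(ctx: ParallelResearchContext) -> str:
    # Runs only after all parallel researchers have finished,
    # so reading their fields here is race-free.
    sections = [
        ("Web", ctx.web_findings),
        ("Database", ctx.database_findings),
        ("API", ctx.api_findings),
    ]
    ctx.final_report = "\n".join(
        f"[{name}] {text}" for name, text in sections if text
    )
    return ctx.final_report

ctx = ParallelResearchContext(web_findings="3 relevant posts",
                              api_findings="status: degraded")
report = compile_report(ctx)
```

Skipping empty sections also gives the orchestrator a natural place to notice that a researcher produced nothing and decide whether to retry it.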

For scenarios where concurrent writes to the same field are unavoidable, use a thread-safe structure:

import threading
from dataclasses import dataclass, field

@dataclass
class ThreadSafeContext:
    _lock: threading.Lock = field(default_factory=threading.Lock)
    _findings: list[str] = field(default_factory=list)

    def add_finding(self, finding: str):
        with self._lock:
            self._findings.append(finding)

    def get_findings(self) -> list[str]:
        with self._lock:
            return list(self._findings)
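A quick stress test, with plain Python threads standing in for concurrent agents, confirms the pattern holds up under contention (the class is repeated here so the sketch is self-contained):

```python
import threading
from dataclasses import dataclass, field

@dataclass
class ThreadSafeContext:
    _lock: threading.Lock = field(default_factory=threading.Lock)
    _findings: list[str] = field(default_factory=list)

    def add_finding(self, finding: str):
        with self._lock:
            self._findings.append(finding)

    def get_findings(self) -> list[str]:
        with self._lock:
            return list(self._findings)

ctx = ThreadSafeContext()

def worker(agent_id: int):
    # Each simulated agent appends 100 findings concurrently.
    for i in range(100):
        ctx.add_finding(f"agent{agent_id}-{i}")

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note that get_findings returns a copy of the list, so callers can iterate over the snapshot without holding the lock.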

Context vs. Conversation History

A common mistake is to store everything in the conversation history by having agents emit verbose messages. This wastes context window tokens and creates noise. Use context for structured operational data and the conversation history for the dialogue:

Data Type                      Store In
Customer ID, name, tier        RunContext
Shopping cart items            RunContext
Workflow phase                 RunContext
What the user said             Conversation history
Agent explanations to user     Conversation history
Tool call results (visible)    Conversation history

Persisting Context Beyond a Single Run

RunContext lives for the duration of a single Runner.run() call. If your application spans multiple runs (for example, a chat session with multiple user messages), you need to persist the context between runs:

import json

def save_context(context: CustomerContext) -> str:
    return json.dumps({
        "customer_id": context.customer_id,
        "customer_name": context.customer_name,
        "subscription_tier": context.subscription_tier,
        "interaction_notes": context.interaction_notes,
    })

def load_context(data: str) -> CustomerContext:
    d = json.loads(data)
    return CustomerContext(**d)

Store the serialized context in your session store (Redis, database, or in-memory cache) and reload it for each subsequent run.
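Wired together, the per-turn cycle might look like this sketch, with an in-memory dict standing in for Redis. The `SESSION_STORE`, `start_of_turn`, and `end_of_turn` names are illustrative, and this variant of save_context uses dataclasses.asdict to stay in sync with the fields:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CustomerContext:
    customer_id: str = ""
    customer_name: str = ""
    subscription_tier: str = ""
    interaction_notes: list[str] = field(default_factory=list)

def save_context(context: CustomerContext) -> str:
    return json.dumps(asdict(context))

def load_context(data: str) -> CustomerContext:
    return CustomerContext(**json.loads(data))

SESSION_STORE: dict[str, str] = {}  # stand-in for Redis or a database

def start_of_turn(session_id: str) -> CustomerContext:
    """Reload the context for a returning session, or start fresh."""
    data = SESSION_STORE.get(session_id)
    return load_context(data) if data else CustomerContext()

def end_of_turn(session_id: str, context: CustomerContext) -> None:
    """Persist the context after the run completes."""
    SESSION_STORE[session_id] = save_context(context)

# Turn 1: the run populates the context, then we persist it
ctx = start_of_turn("sess_1")
ctx.customer_id = "cust_12345"
end_of_turn("sess_1", ctx)

# Turn 2: a later run starts from the saved state
ctx2 = start_of_turn("sess_1")
```

Because the context is serialized to JSON at every turn boundary, each run gets an independent copy -- two concurrent sessions can never mutate each other's state.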

FAQ

Can different agents see different parts of the context?

The SDK gives all agents and tools access to the full RunContext object. If you need to restrict access, implement it at the tool level — only provide certain tools to certain agents, and only those tools read/write specific context fields.

What is the maximum size for a RunContext object?

There is no hard limit imposed by the SDK. The context is a Python object in memory, so the limit is your server's RAM. However, keep the context lean. If you are storing megabytes of data in the context, you should be storing it in a database and keeping only references in the context.

Should I pass context through tool outputs or through RunContext?

Use RunContext for structured data that multiple agents need across the workflow. Use tool outputs for data that only the current agent needs to see in the conversation. If in doubt, ask: "Will another agent need this data later?" If yes, put it in RunContext.


#SharedState #MultiAgentSystems #OpenAIAgentsSDK #RunContext #StateManagement #AgenticAI #LearnAI #AIEngineering


CallSphere Team

Expert insights on AI voice agents and customer communication automation.
