
Agent Memory Sharing Strategies: Blackboard, Message Passing, and Shared Vector Stores

Compare three fundamental memory sharing architectures for multi-agent systems — blackboard, message passing, and shared vector stores — with implementation patterns, consistency considerations, and performance tradeoffs.

The Memory Sharing Problem

When multiple agents work together, they need to share information — intermediate results, discovered facts, decisions made, and context about the current task. How you architect this shared memory determines your system's consistency, performance, and scalability.

Three dominant patterns have emerged: the blackboard architecture (shared mutable state), message passing (explicit communication), and shared vector stores (semantic memory). Each makes different tradeoffs, and understanding when to use which pattern is critical for building reliable multi-agent systems.

Pattern 1: Blackboard Architecture

The blackboard is a shared workspace where agents read and write structured data. It originates from the Hearsay-II speech understanding system in the 1970s and remains one of the most practical patterns for collaborative problem-solving.

import asyncio
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional
import time

@dataclass
class BlackboardEntry:
    key: str
    value: Any
    written_by: str
    timestamp: float = field(default_factory=time.time)
    confidence: float = 1.0

class Blackboard:
    def __init__(self):
        self._data: Dict[str, BlackboardEntry] = {}
        self._subscribers: Dict[str, List[Callable]] = {}
        self._lock = asyncio.Lock()
        self._history: List[BlackboardEntry] = []

    async def write(
        self,
        key: str,
        value: Any,
        agent_id: str,
        confidence: float = 1.0,
    ):
        async with self._lock:
            entry = BlackboardEntry(
                key=key,
                value=value,
                written_by=agent_id,
                confidence=confidence,
            )
            self._data[key] = entry
            self._history.append(entry)

        # Notify subscribers outside the lock
        for callback in self._subscribers.get(key, []):
            await callback(entry)

    async def read(self, key: str) -> Optional[BlackboardEntry]:
        async with self._lock:
            return self._data.get(key)

    async def query(self, prefix: str) -> List[BlackboardEntry]:
        async with self._lock:
            return [
                entry for key, entry in self._data.items()
                if key.startswith(prefix)
            ]

    def subscribe(self, key: str, callback: Callable):
        if key not in self._subscribers:
            self._subscribers[key] = []
        self._subscribers[key].append(callback)

When to use: When agents need real-time access to a shared problem state, when the number of agents is moderate (under 20), and when you want simple read/write semantics.

Tradeoff: Easy to implement but creates tight coupling. All agents must agree on key naming conventions and data formats. Concurrent writes to the same key require careful conflict resolution.
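One common conflict-resolution strategy for concurrent writes is last-writer-wins with a confidence gate: a new value only replaces an existing one if it carries higher confidence. A minimal sketch, using a stripped-down blackboard (the `ConfidenceBlackboard` class and `write_if_better` method here are illustrative, not part of the code above):

```python
import asyncio
import time
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Entry:
    value: Any
    written_by: str
    confidence: float
    timestamp: float = field(default_factory=time.time)

class ConfidenceBlackboard:
    """Resolve concurrent writes by keeping the higher-confidence entry."""

    def __init__(self):
        self._data: Dict[str, Entry] = {}
        self._lock = asyncio.Lock()

    async def write_if_better(
        self, key: str, value: Any, agent_id: str, confidence: float
    ) -> bool:
        async with self._lock:
            current = self._data.get(key)
            if current is not None and current.confidence >= confidence:
                return False  # keep the existing, higher-confidence entry
            self._data[key] = Entry(value, agent_id, confidence)
            return True

async def demo():
    bb = ConfidenceBlackboard()
    await bb.write_if_better("lang", "en", "detector_a", 0.6)   # accepted
    await bb.write_if_better("lang", "fr", "detector_b", 0.4)   # rejected
    await bb.write_if_better("lang", "de", "detector_c", 0.9)   # accepted
    return bb._data["lang"].value

result = asyncio.run(demo())
```

The same check-then-write can be layered onto the `Blackboard.write` above since both already serialize writes behind a lock.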

Pattern 2: Message Passing

In message passing, agents communicate exclusively through explicit messages. There is no shared state — each agent maintains its own local memory and shares information by sending messages to specific agents or broadcasting to channels.

from collections import defaultdict
from typing import Set
import uuid

@dataclass
class Message:
    sender: str
    content: Any
    channel: str = "default"
    msg_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

class MessageBroker:
    def __init__(self):
        self._queues: Dict[str, asyncio.Queue] = {}
        self._channels: Dict[str, Set[str]] = defaultdict(set)

    def register(self, agent_id: str):
        self._queues[agent_id] = asyncio.Queue()

    def subscribe_channel(self, agent_id: str, channel: str):
        self._channels[channel].add(agent_id)

    async def send_direct(self, message: Message, recipient: str):
        queue = self._queues.get(recipient)
        if queue:
            await queue.put(message)

    async def broadcast(self, message: Message):
        subscribers = self._channels.get(message.channel, set())
        for agent_id in subscribers:
            queue = self._queues.get(agent_id)
            if queue:
                await queue.put(message)

    async def receive(
        self, agent_id: str, timeout: float = 5.0
    ) -> Optional[Message]:
        queue = self._queues.get(agent_id)
        if not queue:
            return None
        try:
            return await asyncio.wait_for(queue.get(), timeout)
        except asyncio.TimeoutError:
            return None

When to use: When agents are loosely coupled, when you need audit trails of all communication, or when agents may run on different machines.


Tradeoff: No shared state means no consistency issues, but agents must explicitly request information they need. This increases message volume and latency for queries that would be instant on a blackboard.
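A common way to manage that request overhead is the request/response correlation pattern: the requester attaches its message ID, and the responder echoes it back in a `reply_to` field so the answer can be matched to the question. A minimal sketch with plain `asyncio.Queue` inboxes (the `reply_to` field and `kb_agent` responder are illustrative additions, not part of the broker above):

```python
import asyncio
import uuid
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class Message:
    sender: str
    content: Any
    reply_to: Optional[str] = None  # correlates a response with its request
    msg_id: str = field(default_factory=lambda: str(uuid.uuid4()))

async def kb_agent(inbox: asyncio.Queue, queues: Dict[str, asyncio.Queue]):
    # Answer one request and route the reply back to the sender's inbox
    msg = await inbox.get()
    reply = Message(sender="kb_agent",
                    content={"capital": "Paris"},
                    reply_to=msg.msg_id)
    await queues[msg.sender].put(reply)

async def demo():
    queues = {"kb_agent": asyncio.Queue(), "planner": asyncio.Queue()}
    asyncio.create_task(kb_agent(queues["kb_agent"], queues))

    request = Message(sender="planner", content="capital of France?")
    await queues["kb_agent"].put(request)

    reply = await queues["planner"].get()
    assert reply.reply_to == request.msg_id  # response matched to request
    return reply.content["capital"]

result = asyncio.run(demo())
```

With correlation IDs in place, an agent can have several requests in flight at once and still pair each answer with the query that produced it.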

Pattern 3: Shared Vector Store

A shared vector store gives agents semantic memory — they can store and retrieve information based on meaning rather than exact keys. This is especially powerful when agents produce unstructured knowledge (research findings, conversation summaries, analysis results).

from typing import Tuple
import numpy as np

class SharedVectorMemory:
    def __init__(self, embedding_dim: int = 1536):
        self.embedding_dim = embedding_dim
        self._entries: List[Dict] = []
        self._embeddings: List[np.ndarray] = []
        self._lock = asyncio.Lock()

    async def store(
        self,
        text: str,
        embedding: np.ndarray,
        agent_id: str,
        metadata: Optional[Dict] = None,
    ):
        async with self._lock:
            self._entries.append({
                "text": text,
                "agent_id": agent_id,
                "metadata": metadata or {},
                "timestamp": time.time(),
            })
            self._embeddings.append(embedding)

    async def search(
        self,
        query_embedding: np.ndarray,
        top_k: int = 5,
        agent_filter: Optional[str] = None,
    ) -> List[Tuple[Dict, float]]:
        async with self._lock:
            if not self._embeddings:
                return []

            matrix = np.array(self._embeddings)
            similarities = np.dot(matrix, query_embedding) / (
                np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_embedding)
            )

            results = []
            for idx in np.argsort(similarities)[::-1]:
                entry = self._entries[idx]
                if agent_filter and entry["agent_id"] != agent_filter:
                    continue
                results.append((entry, float(similarities[idx])))
                if len(results) >= top_k:
                    break

            return results

When to use: When agents produce unstructured knowledge, when you need fuzzy retrieval (finding related information rather than exact lookups), or when building research and analysis systems.

Tradeoff: Higher latency than blackboard reads due to embedding computation and similarity search. Requires an embedding model. Results are approximate — you may miss relevant entries or surface irrelevant ones.
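The cosine-similarity scoring used by `search` above can be exercised end-to-end with toy vectors. This sketch uses hand-built 3-dimensional "embeddings" purely for illustration; a real system would get 1536-dimensional vectors from an embedding model:

```python
import numpy as np

def cosine_top_k(matrix: np.ndarray, query: np.ndarray, k: int = 2):
    # Same scoring as SharedVectorMemory.search: cosine similarity, sorted descending
    sims = matrix @ query / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(query)
    )
    order = np.argsort(sims)[::-1][:k]
    return order.tolist(), sims[order].tolist()

texts = ["pricing report", "latency analysis", "customer complaint"]
matrix = np.array([[1.0, 0.1, 0.0],
                   [0.0, 1.0, 0.1],
                   [0.1, 0.0, 1.0]])
query = np.array([0.9, 0.2, 0.0])  # close in direction to "pricing report"

idx, scores = cosine_top_k(matrix, query)
top_text = texts[idx[0]]
```

Note that the ranking depends entirely on vector direction, which is why results are approximate: two texts an embedding model happens to place nearby will rank high even if only one is actually relevant.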

Choosing the Right Pattern

| Criteria | Blackboard | Message Passing | Vector Store |
| --- | --- | --- | --- |
| Latency | Low (direct read) | Medium (async) | Higher (similarity search) |
| Consistency | Needs locking | No shared state | Eventually consistent |
| Scalability | Moderate | High | High |
| Query type | Exact key | Direct/broadcast | Semantic similarity |
| Best for | Structured collaboration | Decoupled agents | Knowledge retrieval |

In practice, production systems often combine patterns. Use a blackboard for structured task state, message passing for coordination signals, and a vector store for accumulated knowledge.
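The hybrid approach can be sketched as a single facade that routes each kind of information to the pattern that fits it. Everything here (the `HybridMemory` class, its field names, and the `publish_result` method) is a hypothetical illustration of the composition, with plain in-memory stand-ins for each layer:

```python
import asyncio
from typing import Any, Dict, List, Tuple

class HybridMemory:
    """Sketch: one facade, three memory patterns behind it."""

    def __init__(self):
        self.task_state: Dict[str, Any] = {}           # blackboard: structured state
        self.signals: asyncio.Queue = asyncio.Queue()  # message passing: coordination
        self.knowledge: List[str] = []                 # vector store stand-in: free text

    async def publish_result(self, key: str, value: Any, note: str):
        self.task_state[key] = value               # exact-key lookup for other agents
        self.knowledge.append(note)                # text to embed for semantic search
        await self.signals.put(("updated", key))   # wake agents waiting on this key

async def demo() -> Tuple[Tuple[str, str], Any, int]:
    mem = HybridMemory()
    await mem.publish_result(
        "report.status", "done", "Pricing report finished; margins up 3%."
    )
    signal = await mem.signals.get()
    return signal, mem.task_state["report.status"], len(mem.knowledge)

state = asyncio.run(demo())
```

The key design choice is that one logical event (a finished result) fans out to all three layers, so consumers can pick the access pattern they need: exact read, notification, or later semantic recall.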

FAQ

Can I use Redis as a blackboard?

Yes, Redis is an excellent backing store for a blackboard. Use Redis hashes for structured entries, pub/sub for subscriber notifications, and sorted sets for time-ordered history. Redis also gives you atomic operations (SETNX, WATCH/MULTI) for conflict resolution on concurrent writes.

How do I handle stale data in shared memory?

Add TTL (time-to-live) metadata to every entry. For blackboards, agents should check the timestamp before trusting a value. For vector stores, include a recency bias in your similarity scoring: multiply the cosine similarity by a time-decay factor. For message passing, staleness is less of a concern because each message is consumed exactly once, though a message can still go stale while sitting unread in a queue.
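One simple form of that time-decay factor is exponential decay with a configurable half-life (the `decayed_score` helper and its default half-life are illustrative choices, not a standard):

```python
import time
from typing import Optional

def decayed_score(cosine_sim: float, entry_ts: float,
                  half_life_s: float = 3600.0,
                  now: Optional[float] = None) -> float:
    # Exponential decay: the score halves for every half_life_s seconds of age
    now = time.time() if now is None else now
    age = max(0.0, now - entry_ts)
    return cosine_sim * 0.5 ** (age / half_life_s)

now = 1_000_000.0
fresh = decayed_score(0.80, entry_ts=now, now=now)          # no decay
stale = decayed_score(0.90, entry_ts=now - 7200, now=now)   # two half-lives old
```

With a one-hour half-life, a two-hour-old entry keeps only a quarter of its raw similarity, so a slightly less similar but fresh entry can outrank it.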

Should agents have private memory in addition to shared memory?

Always. Agents should maintain a private working memory for in-progress reasoning, intermediate calculations, and agent-specific context. Only publish to shared memory when you have a result, decision, or fact that other agents need. This reduces noise and contention in the shared space.


#AgentMemory #BlackboardArchitecture #VectorStores #MessagePassing #MultiAgentSystems #AgenticAI #PythonAI #SharedMemory
