
Memory Privacy and Isolation: Multi-User Memory Without Data Leakage

Design secure multi-user memory systems for AI agents with strict user isolation, memory partitioning, encryption at rest, and fine-grained access control to prevent data leakage.

The Multi-User Memory Risk

When an AI agent serves multiple users, its memory system becomes a potential vector for data leakage. User A asks the agent about their medical records. User B asks a general question, and the agent accidentally includes details from User A's session in its context. This is not hypothetical — it happens when memory systems lack proper isolation.

Multi-user memory requires strict partitioning, encryption, and access control. No query should ever return memories belonging to a different user, regardless of how similar the content is to the query.

User Isolation Architecture

The foundation is a namespace-per-user design. Each user's memories live in a logically separate partition. The memory store enforces partition boundaries at every access point.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
import hashlib
import secrets


@dataclass
class IsolatedMemory:
    content: str
    user_id: str
    created_at: datetime
    category: str = "general"
    encrypted: bool = False
    id: str = ""


class UserIsolatedMemoryStore:
    def __init__(self):
        # Memories partitioned by user_id
        self._partitions: dict[str, dict[str, IsolatedMemory]] = {}
        self._next_id = 0
        self._encryption_keys: dict[str, bytes] = {}

    def _ensure_partition(self, user_id: str):
        if user_id not in self._partitions:
            self._partitions[user_id] = {}

    def _gen_id(self) -> str:
        self._next_id += 1
        return f"mem_{self._next_id:06d}"

    def add(
        self,
        user_id: str,
        content: str,
        category: str = "general",
    ) -> str:
        self._ensure_partition(user_id)
        mem_id = self._gen_id()
        memory = IsolatedMemory(
            id=mem_id,
            content=content,
            user_id=user_id,
            created_at=datetime.now(),
            category=category,
        )
        self._partitions[user_id][mem_id] = memory
        return mem_id

    def query(
        self,
        user_id: str,
        category: str | None = None,
        keyword: str | None = None,
        top_k: int = 10,
    ) -> list[IsolatedMemory]:
        partition = self._partitions.get(user_id, {})
        results = list(partition.values())

        if category:
            results = [
                m for m in results if m.category == category
            ]
        if keyword:
            results = [
                m for m in results
                if keyword.lower() in m.content.lower()
            ]

        results.sort(key=lambda m: m.created_at, reverse=True)
        return results[:top_k]

The critical design decision here is that every method requires a user_id parameter. There is no method to query across all users. Cross-partition access is architecturally impossible through the public API.
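To see the guarantee in practice, here is a trimmed-down, self-contained version of the partitioned store. Even when two users' memories contain the same keyword, a query only ever sees the caller's own partition:

```python
# Minimal stand-in for UserIsolatedMemoryStore, just enough to show
# that matching content never crosses a partition boundary.
class MiniStore:
    def __init__(self):
        self._partitions: dict[str, list[str]] = {}

    def add(self, user_id: str, content: str) -> None:
        self._partitions.setdefault(user_id, []).append(content)

    def query(self, user_id: str, keyword: str) -> list[str]:
        # Lookup starts from the caller's partition; other users'
        # partitions are never consulted.
        return [
            c for c in self._partitions.get(user_id, [])
            if keyword.lower() in c.lower()
        ]

store = MiniStore()
store.add("alice", "allergy: penicillin")
store.add("bob", "no known allergies")

# Bob's query matches on "allerg" but only within Bob's partition.
print(store.query("bob", "allerg"))   # ['no known allergies']
print(store.query("eve", "allerg"))   # [] — unknown user, empty partition
```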

Memory Partitioning Strategies

Beyond the logical namespace approach, you can add physical partitioning for defense in depth.

Database-level isolation uses separate database schemas or tables per user. Even a successful SQL injection is confined to the schemas the connection's role can reach, so narrowly scoped per-user roles keep the blast radius to a single user.
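A provisioning sketch for the schema-per-user approach, in the same string-constant style as the RLS snippet (schema and role names here are illustrative):

```python
# Example: schema-per-user provisioning (names are illustrative)
SCHEMA_SETUP_SQL = """
-- One schema per user, created at signup
CREATE SCHEMA user_123;
CREATE TABLE user_123.memories (
    id TEXT PRIMARY KEY,
    content TEXT NOT NULL,
    category TEXT DEFAULT 'general',
    created_at TIMESTAMPTZ DEFAULT now()
);

-- The per-user application role can only reach its own schema
GRANT USAGE ON SCHEMA user_123 TO user_123_role;
GRANT SELECT, INSERT ON user_123.memories TO user_123_role;
"""
```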


Row-level security uses a single table with a user_id column and database-enforced RLS policies. This is more storage-efficient while still preventing cross-user access at the database layer.

# Example: PostgreSQL row-level security setup
RLS_SETUP_SQL = """
-- Enable RLS on the memories table
ALTER TABLE memories ENABLE ROW LEVEL SECURITY;

-- Policy: users can only access their own rows
CREATE POLICY user_isolation ON memories
    USING (user_id = current_setting('app.current_user_id'))
    WITH CHECK (user_id = current_setting('app.current_user_id'));

-- Set user context before queries
SET app.current_user_id = 'user_123';
SELECT * FROM memories;  -- Only returns user_123's rows
"""

Encryption at Rest

Even with partitioning, an attacker who gains database access could read all memories. Encryption at rest adds another layer of protection. Each user gets a unique encryption key, and memory content is encrypted before storage.

from cryptography.fernet import Fernet


class EncryptedMemoryStore(UserIsolatedMemoryStore):
    def _get_user_key(self, user_id: str) -> Fernet:
        if user_id not in self._encryption_keys:
            key = Fernet.generate_key()
            self._encryption_keys[user_id] = key
        return Fernet(self._encryption_keys[user_id])

    def add_encrypted(
        self,
        user_id: str,
        content: str,
        category: str = "general",
    ) -> str:
        fernet = self._get_user_key(user_id)
        encrypted_content = fernet.encrypt(
            content.encode()
        ).decode()

        self._ensure_partition(user_id)
        mem_id = self._gen_id()
        memory = IsolatedMemory(
            id=mem_id,
            content=encrypted_content,
            user_id=user_id,
            created_at=datetime.now(),
            category=category,
            encrypted=True,
        )
        self._partitions[user_id][mem_id] = memory
        return mem_id

    def read_encrypted(
        self, user_id: str, mem_id: str
    ) -> str | None:
        partition = self._partitions.get(user_id, {})
        memory = partition.get(mem_id)
        if not memory:
            return None

        if memory.encrypted:
            fernet = self._get_user_key(user_id)
            return fernet.decrypt(
                memory.content.encode()
            ).decode()
        return memory.content

Access Control Layers

Fine-grained access control goes beyond user isolation. Within a user's partition, different categories of memory may have different sensitivity levels.

from enum import Enum


class SensitivityLevel(Enum):
    PUBLIC = "public"
    PRIVATE = "private"
    SENSITIVE = "sensitive"  # PII, health, financial


ACCESS_POLICIES = {
    SensitivityLevel.PUBLIC: {"agent", "admin", "export"},
    SensitivityLevel.PRIVATE: {"agent", "admin"},
    SensitivityLevel.SENSITIVE: {"admin"},
}


def check_access(
    sensitivity: SensitivityLevel,
    accessor_role: str,
) -> bool:
    allowed = ACCESS_POLICIES.get(sensitivity, set())
    return accessor_role in allowed


def query_with_access_check(
    store: UserIsolatedMemoryStore,
    user_id: str,
    accessor_role: str,
    category: str | None = None,
) -> list[IsolatedMemory]:
    all_memories = store.query(user_id, category=category)
    # Filter based on accessor's permission level
    return [
        m for m in all_memories
        if check_access(
            SensitivityLevel(
                m.category if m.category in {"public", "private", "sensitive"}
                else "private"
            ),
            accessor_role,
        )
    ]
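Running the same policy checks standalone (the policy table is repeated inline, keyed by plain strings, so the snippet is self-contained):

```python
# Same policy shape as above, keyed by plain strings for brevity.
ACCESS_POLICIES = {
    "public": {"agent", "admin", "export"},
    "private": {"agent", "admin"},
    "sensitive": {"admin"},
}

memories = [
    ("prefers email contact", "public"),
    ("works night shifts", "private"),
    ("diagnosis: asthma", "sensitive"),
]

def visible_to(role: str) -> list[str]:
    # Keep only memories whose sensitivity level admits this role.
    return [
        content for content, level in memories
        if role in ACCESS_POLICIES.get(level, set())
    ]

print(visible_to("agent"))   # public + private, never sensitive
print(visible_to("export"))  # public only
```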

Data Deletion and Right to Erasure

GDPR and similar regulations require the ability to delete all data for a specific user. With partitioned memory, this is straightforward — delete the entire partition.

# Method on UserIsolatedMemoryStore (shown unindented for readability)
def delete_user_data(self, user_id: str) -> int:
    # Remove the entire partition and the user's encryption key,
    # returning the number of memories erased.
    partition = self._partitions.pop(user_id, {})
    self._encryption_keys.pop(user_id, None)
    return len(partition)

FAQ

What about shared memories that reference multiple users?

Shared memories should be stored in a separate, non-user-partitioned store with explicit access lists. Never store another user's data inside a user's private partition. Cross-references should use opaque identifiers, never raw content.
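A minimal sketch of such a store, assuming an explicit allow-list per shared memory (the class and field names are hypothetical):

```python
from dataclasses import dataclass, field

# Shared memories live outside any user partition and carry an
# explicit access list instead.
@dataclass
class SharedMemory:
    content: str
    allowed_users: set[str] = field(default_factory=set)

class SharedMemoryStore:
    def __init__(self):
        self._memories: list[SharedMemory] = []

    def add(self, content: str, allowed_users: set[str]) -> None:
        self._memories.append(SharedMemory(content, set(allowed_users)))

    def query(self, user_id: str) -> list[str]:
        # A user sees a shared memory only if explicitly listed.
        return [
            m.content for m in self._memories
            if user_id in m.allowed_users
        ]

shared = SharedMemoryStore()
shared.add("project kickoff moved to Friday", {"alice", "bob"})
print(shared.query("alice"))  # ['project kickoff moved to Friday']
print(shared.query("carol"))  # []
```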

How do I handle vector similarity search with encrypted memories?

Encrypted content cannot be embedded or searched directly. The common approach is to store embeddings unencrypted but keep the content encrypted: embeddings are far harder to invert than plaintext, though embedding-inversion attacks exist, so treat the vectors as sensitive too. At retrieval time, search the embeddings, then decrypt only the returned results.
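A dependency-free sketch of this pattern. The letter-frequency "embedding" stands in for a real model, and base64 stands in for Fernet encryption, purely to keep the example runnable:

```python
import base64
import math

def embed(text: str) -> list[float]:
    # Toy embedding: normalized letter-frequency vector
    # (stand-in for a real embedding model).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# "seal" here is base64, a placeholder for real encryption;
# substitute Fernet (as above) in production code.
def seal(text: str) -> str:
    return base64.b64encode(text.encode()).decode()

def unseal(blob: str) -> str:
    return base64.b64decode(blob.encode()).decode()

docs = ["likes hiking on weekends", "allergic to peanuts"]
# Index holds plaintext vectors alongside sealed content.
index = [(embed(d), seal(d)) for d in docs]

query_vec = embed("weekend hiking plans")
best = max(index, key=lambda pair: cosine(query_vec, pair[0]))
print(unseal(best[1]))  # decrypt only the retrieved best match
```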

Is per-user encryption key management too complex?

For production systems, use a key management service (AWS KMS, HashiCorp Vault) instead of generating keys in-process. The KMS handles key rotation, access policies, and audit logging. The code pattern stays the same — you just swap the key source.
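That swap can be captured as a small interface. The KeyProvider protocol and InProcessKeyProvider below are hypothetical names, sketching how a KMS-backed class would slot in behind the same methods:

```python
from typing import Protocol
import secrets

class KeyProvider(Protocol):
    # Minimal interface: everything the store needs from a key source.
    def get_key(self, user_id: str) -> bytes: ...
    def revoke_key(self, user_id: str) -> None: ...

class InProcessKeyProvider:
    """Development stand-in; a KMS/Vault-backed class would
    implement the same two methods."""

    def __init__(self):
        self._keys: dict[str, bytes] = {}

    def get_key(self, user_id: str) -> bytes:
        # Lazily create a 32-byte key per user.
        if user_id not in self._keys:
            self._keys[user_id] = secrets.token_bytes(32)
        return self._keys[user_id]

    def revoke_key(self, user_id: str) -> None:
        self._keys.pop(user_id, None)

provider: KeyProvider = InProcessKeyProvider()
key = provider.get_key("user_123")
print(len(key))  # 32-byte key, stable per user until revoked
```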


#MemoryPrivacy #DataIsolation #MultiUser #Security #AgenticAI #LearnAI #AIEngineering
