Building a Composable Agent Library: Reusable Agent Components for Your Organization
Create a shared library of reusable, well-tested agent components using the OpenAI Agents SDK with factory patterns, configuration-driven agents, testing utilities, documentation standards, and semantic versioning.
The Problem: Agent Copy-Paste Culture
Every team that builds agents eventually hits the same wall. Someone copies an agent definition from one project to another. They tweak the instructions slightly. The tool definitions drift. Bug fixes in one copy never reach the others.
A composable agent library solves this by providing a shared catalog of tested, versioned, configurable agent components that any team can import and use.
Project Structure
Organize your library as a proper Python package.
agent_library/
    __init__.py
    version.py
    core/
        __init__.py
        base.py           # Base classes and protocols
        config.py         # Configuration models
        registry.py       # Agent registry
    agents/
        __init__.py
        support.py        # Customer support agents
        research.py       # Research and analysis agents
        data.py           # Data processing agents
    tools/
        __init__.py
        web.py            # Web scraping and API tools
        database.py       # Database query tools
        messaging.py      # Email, Slack, notification tools
    testing/
        __init__.py
        fixtures.py       # Test fixtures and mocks
        assertions.py     # Custom test assertions
    py.typed              # PEP 561 marker
Configuration-Driven Agent Factory
The factory pattern lets consumers customize agents without modifying source code.
# agent_library/core/config.py
from typing import Any

from pydantic import BaseModel, Field


class AgentConfig(BaseModel):
    """Configuration for creating an agent instance."""

    name: str
    instructions_override: str | None = None
    model: str = "gpt-4o"
    temperature: float = 0.7
    max_tokens: int = 4096
    enabled_tools: list[str] | None = None  # None = all tools
    metadata: dict[str, Any] = Field(default_factory=dict)
# agent_library/core/base.py
from abc import ABC, abstractmethod

from agents import Agent, FunctionTool, ModelSettings

from .config import AgentConfig


class AgentComponent(ABC):
    """Base class for all library agent components."""

    @abstractmethod
    def default_config(self) -> AgentConfig:
        """Return the default configuration."""
        ...

    @abstractmethod
    def default_instructions(self) -> str:
        """Return the default system instructions."""
        ...

    @abstractmethod
    def available_tools(self) -> dict[str, FunctionTool]:
        """Return all available tools keyed by name."""
        ...

    def build(self, config: AgentConfig | None = None) -> Agent:
        """Build an Agent instance from configuration."""
        cfg = config or self.default_config()
        all_tools = self.available_tools()
        # Filter tools if specified
        if cfg.enabled_tools is not None:
            tools = [all_tools[name] for name in cfg.enabled_tools if name in all_tools]
        else:
            tools = list(all_tools.values())
        return Agent(
            name=cfg.name,
            instructions=cfg.instructions_override or self.default_instructions(),
            tools=tools,
            model=cfg.model,
            # Wire the sampling settings through so temperature and
            # max_tokens in AgentConfig actually take effect
            model_settings=ModelSettings(
                temperature=cfg.temperature, max_tokens=cfg.max_tokens
            ),
        )
Implementing a Reusable Agent Component
Here is a support agent component that teams can configure for their product.
# agent_library/agents/support.py
from agents import FunctionTool, function_tool

from ..core.base import AgentComponent
from ..core.config import AgentConfig


class SupportAgentComponent(AgentComponent):
    def __init__(self, product_name: str = "our product", knowledge_base_url: str = ""):
        self.product_name = product_name
        self.knowledge_base_url = knowledge_base_url

    def default_config(self) -> AgentConfig:
        return AgentConfig(
            name="support_agent",
            model="gpt-4o",
            temperature=0.5,
        )

    def default_instructions(self) -> str:
        return f"""You are a support agent for {self.product_name}.

Rules:
- Search the knowledge base before answering
- Be concise: 1-3 sentences unless detail is requested
- Escalate if the issue needs human intervention
- Never share internal system details with users"""

    def available_tools(self) -> dict[str, FunctionTool]:
        @function_tool
        async def search_knowledge_base(query: str) -> str:
            """Search the product knowledge base."""
            # Implementation would call the actual KB API at self.knowledge_base_url
            return f"KB results for '{query}': [article_1, article_2]"

        @function_tool
        async def create_ticket(
            subject: str, description: str, priority: str = "medium"
        ) -> str:
            """Create a support ticket for issues needing human follow-up."""
            return f"Ticket created: {subject} (priority: {priority})"

        @function_tool
        async def check_account_status(account_id: str) -> str:
            """Check account status and subscription details."""
            return f"Account {account_id}: Active, Pro plan, next billing 2026-04-01"

        return {
            "search_knowledge_base": search_knowledge_base,
            "create_ticket": create_ticket,
            "check_account_status": check_account_status,
        }
Agent Registry: Discovering and Instantiating Components
# agent_library/core/registry.py
from agents import Agent

from .base import AgentComponent
from .config import AgentConfig


class AgentRegistry:
    _components: dict[str, type[AgentComponent]] = {}

    @classmethod
    def register(cls, name: str):
        """Decorator to register an agent component."""
        def decorator(component_class: type[AgentComponent]):
            cls._components[name] = component_class
            return component_class
        return decorator

    @classmethod
    def list_components(cls) -> list[str]:
        return list(cls._components.keys())

    @classmethod
    def create(cls, name: str, config: AgentConfig | None = None, **kwargs) -> Agent:
        if name not in cls._components:
            raise ValueError(
                f"Unknown component: {name}. Available: {cls.list_components()}"
            )
        component = cls._components[name](**kwargs)
        return component.build(config)


# Register components -- do this in agent_library/__init__.py so that
# importing the package populates the registry
from agent_library.agents.support import SupportAgentComponent

@AgentRegistry.register("support")
class RegisteredSupportAgent(SupportAgentComponent):
    pass
Testing Agent Components
Build testing utilities that make it easy to verify agent behavior without spending LLM tokens.
# agent_library/testing/fixtures.py
from types import SimpleNamespace
from unittest.mock import AsyncMock, patch

from agents import Agent, Runner


class AgentTestHarness:
    """Test harness for agent components."""

    def __init__(self, agent: Agent):
        self.agent = agent

    def assert_has_tool(self, tool_name: str):
        tool_names = [t.name for t in self.agent.tools]
        assert tool_name in tool_names, (
            f"Tool '{tool_name}' not found. Available: {tool_names}"
        )

    def assert_tool_count(self, expected: int):
        actual = len(self.agent.tools)
        assert actual == expected, f"Expected {expected} tools, got {actual}"

    def assert_instructions_contain(self, text: str):
        assert text.lower() in self.agent.instructions.lower(), (
            f"Instructions do not contain '{text}'"
        )

    async def run_with_mock_model(self, input_text: str, mock_response: str) -> str:
        """Run the agent with a mocked model response for deterministic testing."""
        mock_result = SimpleNamespace(final_output=mock_response)
        # Patch with AsyncMock so the mocked Runner.run is still awaitable
        with patch.object(Runner, "run", new_callable=AsyncMock, return_value=mock_result):
            result = await Runner.run(self.agent, input=input_text)
        return result.final_output


# Usage in tests
from agent_library.agents.support import SupportAgentComponent
from agent_library.core.config import AgentConfig


def test_support_agent_has_required_tools():
    component = SupportAgentComponent(product_name="TestApp")
    agent = component.build()
    harness = AgentTestHarness(agent)
    harness.assert_has_tool("search_knowledge_base")
    harness.assert_has_tool("create_ticket")
    harness.assert_tool_count(3)
    harness.assert_instructions_contain("TestApp")


def test_support_agent_tool_filtering():
    component = SupportAgentComponent(product_name="TestApp")
    config = AgentConfig(
        name="limited_support",
        enabled_tools=["search_knowledge_base"],
    )
    agent = component.build(config)
    harness = AgentTestHarness(agent)
    harness.assert_tool_count(1)
    harness.assert_has_tool("search_knowledge_base")
Versioning and Publishing
Use semantic versioning and publish as an internal Python package.
# agent_library/version.py
__version__ = "2.1.0"

# pyproject.toml
[project]
name = "company-agent-library"
version = "2.1.0"
requires-python = ">=3.10"
dependencies = ["openai-agents>=0.1.0", "pydantic>=2.0"]
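Consumers can gate on the library version at import time. A minimal compatibility check, assuming plain MAJOR.MINOR.PATCH strings with no pre-release tags (for anything richer, use `packaging.version` instead):

```python
# Minimal semver gate: same major version, and at least the required
# minor/patch. Assumes plain MAJOR.MINOR.PATCH version strings.
def parse_version(v: str) -> tuple[int, int, int]:
    major, minor, patch = (int(part) for part in v.split("."))
    return (major, minor, patch)


def is_compatible(installed: str, required: str) -> bool:
    i, r = parse_version(installed), parse_version(required)
    return i[0] == r[0] and i >= r


print(is_compatible("2.1.0", "2.0.0"))  # True: same major, newer minor
print(is_compatible("3.0.0", "2.1.0"))  # False: major bump means breaking changes
print(is_compatible("2.0.5", "2.1.0"))  # False: older than required minor
```

This mirrors the caret-style constraint (`^2.0.0`) familiar from other ecosystems; in practice a `>=2.0,<3` specifier in pyproject.toml expresses the same thing declaratively.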
Consumer Usage
Teams consume the library as a dependency.
from agent_library import AgentRegistry
from agent_library.core.config import AgentConfig

# Quick start with defaults
agent = AgentRegistry.create("support", product_name="Acme CRM")

# Customized
agent = AgentRegistry.create(
    "support",
    config=AgentConfig(
        name="acme_support",
        model="gpt-4o-mini",
        enabled_tools=["search_knowledge_base", "create_ticket"],
        temperature=0.3,
    ),
    product_name="Acme CRM",
    knowledge_base_url="https://kb.acme.com/api",
)
FAQ
How do I handle breaking changes when updating agent instructions?
Treat instruction changes like API changes. Minor wording tweaks are patch versions. Adding new tool requirements or changing behavior expectations is a minor version. Removing tools or fundamentally changing the agent's role is a major version. Document changes in a CHANGELOG and give consumers time to migrate.
Should each team fork the library or extend it?
Extend, not fork. The library provides base components. Teams customize through configuration and the instructions_override field. If a team needs genuinely different behavior, they should contribute a new component to the library rather than forking an existing one — this prevents drift and keeps the organization's agent capabilities unified.
How do I test that an agent component works correctly with a live LLM?
Keep two test suites: unit tests using the mock harness (fast, free, run on every commit) and integration tests that call the real LLM (slower, costs money, run nightly or on release). Integration tests should verify that the agent uses the right tools for given inputs and produces responses that match expected patterns, not exact strings.
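The last point, matching patterns rather than exact strings, can be captured in a small integration-test helper (the helper name and example pattern are illustrative):

```python
import re


def assert_matches_pattern(output: str, pattern: str) -> None:
    """Integration-test assertion: LLM output varies between runs,
    so check for a pattern rather than an exact string."""
    assert re.search(pattern, output, re.IGNORECASE), (
        f"Output did not match /{pattern}/: {output!r}"
    )


# Any response that mentions a ticket with an ID is acceptable
assert_matches_pattern("I've created ticket #4821 for you.", r"ticket\s*#?\d+")
```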
CallSphere Team
Expert insights on AI voice agents and customer communication automation.