
Building Composable Tool Libraries: Reusable Tools Across Multiple Agents

Learn how to build tool registries, tool factories, and shared tool modules that work across multiple agents. Covers composable design patterns, parameterized tools, dependency injection, and packaging tools for reuse.

The Reusability Problem

Most AI agent tutorials build tools inline, tightly coupled to a specific agent and a specific use case. When you build a second agent that needs the same database query tool, you copy-paste the code. By the third agent, you have three copies, each with slightly different bug fixes. This is the same problem that led to the creation of software libraries, and the solution is the same: build reusable, composable tool modules.

The Tool Interface

Start by defining a standard interface that all tools implement:

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any

@dataclass
class ToolSchema:
    name: str
    description: str
    parameters: dict

class BaseTool(ABC):
    @abstractmethod
    def schema(self) -> ToolSchema:
        """Return the JSON Schema definition for this tool."""
        pass

    @abstractmethod
    async def execute(self, **kwargs) -> str:
        """Execute the tool with the given arguments and return a string result."""
        pass

    def to_openai_schema(self) -> dict:
        s = self.schema()
        return {
            "type": "function",
            "function": {
                "name": s.name,
                "description": s.description,
                "parameters": s.parameters,
            }
        }

Every tool is a class with a schema and an execute method. This standardization is what makes composition possible.
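As a minimal sketch of what implementing this interface looks like, here is a trivial tool (the `EchoTool` name is illustrative, not from any library; `ToolSchema` and `BaseTool` are repeated so the snippet runs standalone):

```python
# Minimal sketch: the smallest possible tool against the interface above.
# ToolSchema/BaseTool are repeated here so the snippet runs standalone;
# EchoTool is a hypothetical example, handy for wiring and registry tests.
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ToolSchema:
    name: str
    description: str
    parameters: dict

class BaseTool(ABC):
    @abstractmethod
    def schema(self) -> ToolSchema: ...

    @abstractmethod
    async def execute(self, **kwargs) -> str: ...

    def to_openai_schema(self) -> dict:
        s = self.schema()
        return {"type": "function",
                "function": {"name": s.name, "description": s.description,
                             "parameters": s.parameters}}

class EchoTool(BaseTool):
    """Returns its input unchanged -- useful for testing an agent loop end to end."""

    def schema(self) -> ToolSchema:
        return ToolSchema(
            name="echo",
            description="Return the given text unchanged.",
            parameters={
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        )

    async def execute(self, text: str) -> str:
        return text

print(EchoTool().to_openai_schema()["function"]["name"])  # echo
print(asyncio.run(EchoTool().execute(text="hi")))         # hi
```

Because the schema and the execution logic live on the same object, the class is the single source of truth for what the model sees and what actually runs.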

Building Concrete Tools

Here is the database query tool implemented against this interface:

class DatabaseQueryTool(BaseTool):
    def __init__(self, connection_string: str, allowed_tables: list[str] | None = None):
        self.connection_string = connection_string
        self.allowed_tables = allowed_tables
        self.pool = None

    async def connect(self):
        """Call once before the first execute(); creates the shared connection pool."""
        import asyncpg  # lazy import: asyncpg is only needed if this tool is used
        self.pool = await asyncpg.create_pool(self.connection_string, min_size=2, max_size=5)

    def schema(self) -> ToolSchema:
        desc = "Execute a read-only SQL SELECT query against the database."
        if self.allowed_tables:
            desc += f" Available tables: {', '.join(self.allowed_tables)}."
        return ToolSchema(
            name="query_database",
            description=desc,
            parameters={
                "type": "object",
                "properties": {
                    "sql": {"type": "string", "description": "A SQL SELECT query with LIMIT clause"},
                },
                "required": ["sql"],
            },
        )

    async def execute(self, sql: str) -> str:
        import json
        if self.pool is None:
            return "Error: Tool not connected. Call connect() first."
        # Defense in depth: reject anything that is not a SELECT before it reaches the database
        if not sql.strip().upper().startswith("SELECT"):
            return "Error: Only SELECT queries allowed"
        try:
            async with self.pool.acquire() as conn:
                rows = await conn.fetch(sql)
                return json.dumps([dict(r) for r in rows], default=str, indent=2)
        except Exception as e:
            return f"Error: {str(e)}"

The constructor takes configuration (connection string, allowed tables) that customizes the tool for each use case. The schema dynamically includes the allowed tables in the description.

The Tool Registry

A registry manages tools and provides lookup functionality:

class ToolRegistry:
    def __init__(self):
        self._tools: dict[str, BaseTool] = {}

    def register(self, tool: BaseTool):
        name = tool.schema().name
        if name in self._tools:
            raise ValueError(f"Tool '{name}' already registered")
        self._tools[name] = tool

    def get(self, name: str) -> BaseTool:
        if name not in self._tools:
            raise KeyError(f"Unknown tool: {name}. Available: {list(self._tools.keys())}")
        return self._tools[name]

    def all_schemas(self) -> list[dict]:
        return [tool.to_openai_schema() for tool in self._tools.values()]

    async def execute(self, name: str, arguments: dict) -> str:
        tool = self.get(name)
        return await tool.execute(**arguments)

    def list_tools(self) -> list[str]:
        return list(self._tools.keys())
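Before wiring the registry to an LLM, it can be exercised directly. A sketch (the `UpperTool` name is my own; it is a stand-in with the same `schema()`/`execute()` surface as `BaseTool`, and the registry body is the class above trimmed to what this demo uses):

```python
# Sketch: registry round-trip with a hypothetical stand-in tool -- no LLM needed.
import asyncio
from dataclasses import dataclass

@dataclass
class ToolSchema:
    name: str
    description: str
    parameters: dict

class UpperTool:
    """Stand-in tool with the same schema()/execute() surface as BaseTool."""
    def schema(self) -> ToolSchema:
        return ToolSchema("uppercase", "Uppercase the given text.",
                          {"type": "object",
                           "properties": {"text": {"type": "string"}},
                           "required": ["text"]})

    async def execute(self, text: str) -> str:
        return text.upper()

class ToolRegistry:  # same body as above, trimmed to what this demo uses
    def __init__(self):
        self._tools = {}

    def register(self, tool):
        name = tool.schema().name
        if name in self._tools:
            raise ValueError(f"Tool '{name}' already registered")
        self._tools[name] = tool

    async def execute(self, name, arguments):
        return await self._tools[name].execute(**arguments)

registry = ToolRegistry()
registry.register(UpperTool())
print(asyncio.run(registry.execute("uppercase", {"text": "hi"})))  # HI
```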

Now the agent loop is clean and decoupled from any specific tool:

async def run_agent_with_registry(
    registry: ToolRegistry,
    user_message: str,
    system_prompt: str,
) -> str:
    import json

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

    for _ in range(10):  # hard cap on tool-call rounds to avoid runaway loops
        # `client` is assumed to be an AsyncOpenAI instance created at module level
        response = await client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=registry.all_schemas(),
        )
        msg = response.choices[0].message
        messages.append(msg)

        if not msg.tool_calls:
            return msg.content

        for tc in msg.tool_calls:
            args = json.loads(tc.function.arguments)
            result = await registry.execute(tc.function.name, args)
            messages.append({"role": "tool", "tool_call_id": tc.id, "content": result})

    return "Max iterations reached"

Tool Factories

Factories create pre-configured tool instances for common patterns:


class ToolFactory:
    @staticmethod
    def create_db_tool(
        connection_string: str,
        allowed_tables: list[str] | None = None,
    ) -> DatabaseQueryTool:
        return DatabaseQueryTool(
            connection_string=connection_string,
            allowed_tables=allowed_tables,
        )

    @staticmethod
    def create_api_tool(
        name: str,
        base_url: str,
        api_key: str,
        allowed_paths: list[str],
    ) -> "APITool":
        return APITool(
            name=name,
            base_url=base_url,
            api_key=api_key,
            allowed_paths=allowed_paths,
        )

    @staticmethod
    def standard_toolset(db_url: str, api_configs: dict) -> ToolRegistry:
        """Create a registry with the standard toolset for customer support agents."""
        registry = ToolRegistry()

        registry.register(ToolFactory.create_db_tool(
            db_url,
            allowed_tables=["customers", "orders", "products"],
        ))

        for api_name, config in api_configs.items():
            registry.register(ToolFactory.create_api_tool(
                name=api_name,
                **config,
            ))

        return registry

Now spinning up a new agent with a standard toolset is a single call:

registry = ToolFactory.standard_toolset(
    db_url="postgresql://user:pass@localhost/mydb",
    api_configs={
        "slack": {"base_url": "https://slack.com/api", "api_key": "xoxb-...", "allowed_paths": ["/chat.postMessage"]},
    },
)

Parameterized Tools

Some tools share the same logic but operate on different resources. Use parameterization instead of creating separate classes:

class CRUDTool(BaseTool):
    def __init__(self, resource_name: str, table_name: str, columns: list[str], pool):
        self.resource_name = resource_name
        self.table_name = table_name
        self.columns = columns
        self.pool = pool

    def schema(self) -> ToolSchema:
        return ToolSchema(
            name=f"search_{self.resource_name}",
            description=f"Search {self.resource_name} records. Searchable columns: {', '.join(self.columns)}.",
            parameters={
                "type": "object",
                "properties": {
                    "column": {"type": "string", "enum": self.columns},
                    "value": {"type": "string", "description": "Value to search for"},
                    "limit": {"type": "integer", "default": 10, "maximum": 50},
                },
                "required": ["column", "value"],
            },
        )

    async def execute(self, column: str, value: str, limit: int = 10) -> str:
        import json
        if column not in self.columns:
            return f"Error: Invalid column. Choose from: {self.columns}"
        limit = min(limit, 50)  # enforce the schema's maximum server-side too
        # column is checked against the whitelist above and table_name is
        # developer-supplied, so interpolating them is safe; the user-supplied
        # value stays parameterized.
        async with self.pool.acquire() as conn:
            rows = await conn.fetch(
                f"SELECT * FROM {self.table_name} WHERE {column} ILIKE $1 LIMIT $2",
                f"%{value}%", limit,
            )
            return json.dumps([dict(r) for r in rows], default=str, indent=2)

# One class, three tools
registry.register(CRUDTool("customers", "customers", ["name", "email", "phone"], pool))
registry.register(CRUDTool("orders", "orders", ["order_id", "status", "customer_email"], pool))
registry.register(CRUDTool("products", "products", ["name", "category", "sku"], pool))

Three fully functional search tools from one class definition, each with the correct schema and allowed columns.
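The key mechanism is that each instance derives its own name and column enum from its configuration. A trimmed sketch of just that schema logic (class and field names are illustrative, without the database parts):

```python
# Sketch: one parameterized class, a distinct schema per instance.
class SearchTool:
    """Trimmed version of CRUDTool's schema logic, without the database parts."""
    def __init__(self, resource_name: str, columns: list[str]):
        self.resource_name = resource_name
        self.columns = columns

    def schema(self) -> dict:
        return {
            "name": f"search_{self.resource_name}",
            "parameters": {
                "type": "object",
                "properties": {
                    "column": {"type": "string", "enum": self.columns},
                    "value": {"type": "string"},
                },
                "required": ["column", "value"],
            },
        }

customers = SearchTool("customers", ["name", "email", "phone"])
orders = SearchTool("orders", ["order_id", "status", "customer_email"])

print(customers.schema()["name"])  # search_customers
print(orders.schema()["name"])     # search_orders
```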

Packaging Tools as Modules

Organize your tool library as a proper Python package:

# tools/__init__.py
from .base import BaseTool, ToolSchema, ToolRegistry, ToolFactory
from .database import DatabaseQueryTool
from .api import APITool
from .filesystem import FileReadTool, FileWriteTool
from .web import WebFetchTool

__all__ = [
    "BaseTool", "ToolSchema", "ToolRegistry", "ToolFactory",
    "DatabaseQueryTool", "APITool",
    "FileReadTool", "FileWriteTool",
    "WebFetchTool",
]

Each agent imports only what it needs:

from tools import ToolRegistry, DatabaseQueryTool, WebFetchTool

registry = ToolRegistry()
registry.register(DatabaseQueryTool(db_url))
registry.register(WebFetchTool(allowed_domains=["docs.python.org"]))

Tools are shared, tested once, and maintained in a single location. Bug fixes propagate to every agent that uses them.

FAQ

How do I test tools independently of the LLM?

Write unit tests that call tool.execute() directly with known inputs and assert the output. Mock external dependencies (databases, APIs) in tests. Also test the schema method to ensure it returns valid JSON Schema. You do not need an LLM to test tool execution — it is just a function.
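Concretely, such a test is just a couple of asserts. A sketch with a hypothetical `ReverseTool` that has the `schema()`/`execute()` surface described above (shown with plain asserts; the same functions work unchanged under pytest):

```python
# Sketch: unit-testing a tool directly -- no LLM anywhere in the loop.
# ReverseTool is a hypothetical stand-in for one of your real tools.
import asyncio

class ReverseTool:
    def schema(self) -> dict:
        return {"name": "reverse",
                "parameters": {"type": "object",
                               "properties": {"text": {"type": "string"}},
                               "required": ["text"]}}

    async def execute(self, text: str) -> str:
        return text[::-1]

def test_execute_reverses_input():
    assert asyncio.run(ReverseTool().execute(text="abc")) == "cba"

def test_schema_required_fields_exist_in_properties():
    params = ReverseTool().schema()["parameters"]
    assert params["type"] == "object"
    assert set(params["required"]) <= set(params["properties"])

test_execute_reverses_input()
test_schema_required_fields_exist_in_properties()
print("ok")
```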

Should tools manage their own connections or receive them via injection?

Use dependency injection. Pass database pools, HTTP clients, and API keys into the tool constructor rather than having the tool create its own connections. This makes tools testable (you can inject mock connections), configurable (different environments use different connections), and efficient (multiple tools share a single connection pool).
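The testability payoff looks like this in practice. A sketch where a fake pool stands in for asyncpg (`FakePool` and `CustomerSearchTool` are illustrative names, not a real library API; the fake mimics only the slice of the pool surface the tool touches):

```python
# Sketch of the injection payoff: a fake pool replaces asyncpg in tests.
import asyncio
import json
from contextlib import asynccontextmanager

class FakePool:
    """Mimics the slice of the asyncpg pool surface the tool uses."""
    def __init__(self, rows):
        self._rows = rows

    @asynccontextmanager
    async def acquire(self):
        yield self  # the "connection" is the pool itself here

    async def fetch(self, sql, *args):
        return self._rows  # canned rows, no database required

class CustomerSearchTool:
    def __init__(self, pool):
        self.pool = pool  # injected: the tool never creates its own connection

    async def execute(self, value: str) -> str:
        async with self.pool.acquire() as conn:
            rows = await conn.fetch(
                "SELECT * FROM customers WHERE name ILIKE $1", f"%{value}%")
            return json.dumps([dict(r) for r in rows])

tool = CustomerSearchTool(FakePool([{"name": "Ada"}]))
print(asyncio.run(tool.execute("ad")))  # [{"name": "Ada"}]
```

In production the same constructor receives a real `asyncpg` pool; nothing in the tool changes.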

How do I version tools when their schemas change?

Use semantic versioning in your tool package. Breaking schema changes (renamed parameters, removed fields) are major version bumps. New optional parameters are minor versions. Bug fixes are patches. When deploying schema changes, update tool descriptions to reflect the new behavior and test that existing agent workflows still work with the updated schema.
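For example, adding an optional parameter qualifies as a minor bump precisely because every existing call stays valid (the `timeout_s` parameter here is illustrative):

```python
# Sketch: a minor-version schema change adds only an optional parameter.
v1 = {"type": "object",
      "properties": {"sql": {"type": "string"}},
      "required": ["sql"]}

v2 = {"type": "object",
      "properties": {"sql": {"type": "string"},
                     "timeout_s": {"type": "integer", "default": 30}},  # new, optional
      "required": ["sql"]}  # unchanged, so every v1 call is still valid under v2

# Backward compatible: v2 requires nothing that v1 did not.
assert set(v2["required"]) <= set(v1["required"])
print("minor bump")
```

Renaming `sql` or adding it to a second required field would break v1 callers, which is exactly what a major bump signals.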


#ToolLibraries #SoftwareArchitecture #Reusability #AIAgents #Python #AgenticAI #LearnAI #AIEngineering
