LangChain Tool Creation: @tool Decorator, StructuredTool, and Custom Tools
Master LangChain tool creation patterns including the @tool decorator, StructuredTool class, Pydantic input schemas, async tools, and error handling for production-grade agent tools.
Tools Are How Agents Interact with the World
An LLM can reason and generate text, but it cannot look up a database, call an API, or read a file on its own. Tools bridge this gap. When you give an agent tools, the LLM can decide to invoke a function, receive its result, and incorporate that information into its reasoning. The quality of your tool definitions — names, descriptions, and input schemas — directly determines how reliably your agent uses them.
The @tool Decorator
The simplest way to create a LangChain tool is the @tool decorator. It extracts the function name, docstring, and type annotations automatically.
```python
from langchain_core.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the product database for items matching the query.

    Args:
        query: The search terms to look for.
        limit: Maximum number of results to return.
    """
    # Implementation here
    results = db.search(query, limit=limit)
    return f"Found {len(results)} products: {results}"
```
The docstring is critical — the LLM reads it to decide when and how to use the tool. Include what the tool does and what each parameter means. Type annotations define the input schema that the LLM must follow.
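To see why those annotations matter, here is a simplified, standard-library-only sketch of the kind of signature extraction the decorator performs. This is an illustration, not LangChain's actual implementation; the describe_function helper is ours:

```python
import inspect

def describe_function(fn) -> dict:
    """Build a minimal tool schema from a function's signature and docstring."""
    sig = inspect.signature(fn)
    params = {
        name: {
            "type": p.annotation.__name__,
            "required": p.default is inspect.Parameter.empty,
        }
        for name, p in sig.parameters.items()
    }
    return {"name": fn.__name__, "description": inspect.getdoc(fn), "args": params}

def search_database(query: str, limit: int = 10) -> str:
    """Search the product database for items matching the query."""
    ...

schema = describe_function(search_database)
# schema["args"] → {"query": {"type": "str", "required": True},
#                   "limit": {"type": "int", "required": False}}
```

Everything the LLM learns about the tool flows from these three sources, which is why an untyped parameter or a missing docstring degrades tool-calling accuracy.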
You can customize the name and control whether the result is returned directly to the user:
```python
@tool("product_search", return_direct=True)
def search_database(query: str) -> str:
    """Search for products by name or category."""
    return do_search(query)
```
Setting return_direct=True means the tool's output is returned as the final answer without further LLM processing. This is useful for tools that produce user-facing output.
Pydantic Input Schemas
For more complex inputs, define a Pydantic model as the input schema. This gives you validation, default values, and detailed field descriptions.
```python
from langchain_core.tools import tool
from pydantic import BaseModel, Field

class FlightSearchInput(BaseModel):
    origin: str = Field(description="Airport code of departure city (e.g., SFO)")
    destination: str = Field(description="Airport code of arrival city (e.g., JFK)")
    date: str = Field(description="Travel date in YYYY-MM-DD format")
    max_stops: int = Field(default=1, description="Maximum number of stops allowed")

@tool("search_flights", args_schema=FlightSearchInput)
def search_flights(
    origin: str, destination: str, date: str, max_stops: int = 1
) -> str:
    """Search for available flights between two airports on a given date."""
    flights = flight_api.search(origin, destination, date, max_stops)
    return format_flight_results(flights)
```
The Field(description=...) values are included in the tool schema that the LLM sees, so write them to be informative.
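For intuition, the JSON schema the model receives for FlightSearchInput looks roughly like this. The dict below is a hand-written illustration of the shape (Pydantic generates it in real use), not captured output:

```python
# Hand-written illustration of the JSON-schema shape derived from
# FlightSearchInput. Fields with no default become "required".
flight_search_schema = {
    "title": "FlightSearchInput",
    "type": "object",
    "properties": {
        "origin": {
            "type": "string",
            "description": "Airport code of departure city (e.g., SFO)",
        },
        "destination": {
            "type": "string",
            "description": "Airport code of arrival city (e.g., JFK)",
        },
        "date": {
            "type": "string",
            "description": "Travel date in YYYY-MM-DD format",
        },
        "max_stops": {
            "type": "integer",
            "default": 1,
            "description": "Maximum number of stops allowed",
        },
    },
    "required": ["origin", "destination", "date"],
}
```

Note that max_stops is absent from "required" because it has a default; the LLM may omit it in a tool call.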
StructuredTool: Programmatic Tool Creation
When you need to build tools dynamically or from configuration, use StructuredTool.from_function.
```python
from langchain_core.tools import StructuredTool
from pydantic import BaseModel, Field

class CalculatorInput(BaseModel):
    expression: str = Field(description="A mathematical expression to evaluate")

def calculate(expression: str) -> str:
    # Note: eval is unsafe on untrusted input; restrict or sandbox it in production.
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

calculator_tool = StructuredTool.from_function(
    func=calculate,
    name="calculator",
    description="Evaluate mathematical expressions using Python syntax.",
    args_schema=CalculatorInput,
)
```
This approach is equivalent to the @tool decorator but gives you programmatic control over every attribute.
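Because from_function wraps a plain function, the underlying logic stays easy to exercise on its own, before any agent is involved. Using the same calculate function as above:

```python
def calculate(expression: str) -> str:
    # Same function wrapped by StructuredTool.from_function above.
    # Note: eval is unsafe on untrusted input.
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

print(calculate("2 + 3 * 4"))   # → 14
print(calculate("1 / 0"))       # → Error: division by zero
```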
Async Tools
For tools that call external APIs, use async implementations to avoid blocking.
```python
from langchain_core.tools import tool
import httpx

@tool
async def fetch_weather(city: str) -> str:
    """Get the current weather for a city."""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://api.weather.example.com/current?city={city}"
        )
        data = response.json()
    return f"{city}: {data['temp']}F, {data['condition']}"
```
When an agent calls this tool during an async execution (via ainvoke), the async version is used automatically. You can also provide both sync and async implementations:
```python
calculator_tool = StructuredTool.from_function(
    func=calculate_sync,
    coroutine=calculate_async,
    name="calculator",
    description="Evaluate math expressions.",
)
```
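The payoff of async tools is concurrency: while one slow call awaits I/O, others can proceed. A dependency-free sketch of that benefit, with asyncio.sleep standing in for the HTTP request:

```python
import asyncio
import time

async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.2)  # simulate network latency of an API call
    return f"{city}: 72F, sunny"

async def main():
    start = time.perf_counter()
    # Three tool calls run concurrently instead of ~0.6s back-to-back
    results = await asyncio.gather(
        *(fetch_weather(c) for c in ["SFO", "JFK", "LHR"])
    )
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
assert elapsed < 0.5  # concurrent, not sequential
```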
Error Handling in Tools
Agents are more robust when tools handle errors gracefully instead of raising exceptions.
```python
@tool
def query_database(sql: str) -> str:
    """Execute a read-only SQL query against the analytics database."""
    if not sql.strip().upper().startswith("SELECT"):
        return "Error: Only SELECT queries are allowed."
    try:
        results = db.execute(sql)
        return format_results(results)
    except Exception as e:
        return f"Query failed: {str(e)}. Please check the syntax."
```
Returning error messages as strings lets the agent see what went wrong and adjust its approach. If you raise an exception instead, the agent loop may terminate or retry blindly.
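If you have many tools, the same pattern can be factored into a reusable wrapper. A plain-Python sketch (the safe_tool name is ours, not a LangChain API; apply it beneath the @tool decorator in real use):

```python
import functools

def safe_tool(fn):
    """Wrap a tool function so exceptions become error strings the LLM can read."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            return f"Error in {fn.__name__}: {e}. Try adjusting the input."
    return wrapper

@safe_tool
def parse_amount(text: str) -> str:
    return f"Parsed: {float(text):.2f}"

print(parse_amount("19.5"))   # → Parsed: 19.50
print(parse_amount("abc"))    # error string instead of an exception
```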
You can also set handle_tool_error=True on a tool to automatically catch ToolException raised inside it and convert it into an error message the agent sees as the tool's output.
Building a Tool Registry
For agents with many tools, organize them into a registry pattern.
```python
from langchain_core.tools import tool

def build_tools(config: dict) -> list:
    tools = []
    if config.get("enable_search"):
        @tool
        def web_search(query: str) -> str:
            """Search the web for information."""
            return search_api.query(query)
        tools.append(web_search)
    if config.get("enable_database"):
        @tool
        def sql_query(query: str) -> str:
            """Query the database."""
            return db.execute(query)
        tools.append(sql_query)
    return tools

# Feature-flag tools per deployment
tools = build_tools({"enable_search": True, "enable_database": False})
```
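A dictionary-based registry works equally well when tools must be looked up by name. A dependency-free sketch of the pattern (register and TOOL_FACTORIES are our names; the factories would return @tool-decorated functions in real use):

```python
TOOL_FACTORIES = {}

def register(name):
    """Decorator that records a tool factory under a name."""
    def deco(fn):
        TOOL_FACTORIES[name] = fn
        return fn
    return deco

@register("search")
def make_search():
    return "web_search tool"   # stand-in for an @tool-decorated function

@register("db")
def make_db():
    return "sql_query tool"

def build_tools(enabled: list[str]) -> list:
    return [TOOL_FACTORIES[name]() for name in enabled if name in TOOL_FACTORIES]

print(build_tools(["search"]))  # → ['web_search tool']
```

Unknown names are silently skipped, so a stale config entry disables a tool rather than crashing the agent at startup.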
FAQ
How many tools can I give an agent?
There is no hard limit, but more tools mean a larger system prompt and more decisions for the LLM. In practice, agents work best with 5-15 well-defined tools. If you have more, consider using a tool selector or organizing tools into groups that are loaded based on the conversation context.
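One way to stay under that budget is a lightweight selector that scores each tool's description against the user message and passes only the top matches to the agent. A naive keyword-overlap sketch (real selectors often use embeddings; select_tools is our name):

```python
def select_tools(message: str, tools: dict[str, str], top_k: int = 3) -> list[str]:
    """tools maps tool name -> description; returns names of the best matches."""
    words = set(message.lower().split())
    scored = [
        (len(words & set(desc.lower().split())), name)
        for name, desc in tools.items()
    ]
    scored.sort(reverse=True)  # highest word overlap first
    return [name for score, name in scored[:top_k] if score > 0]

tools = {
    "search_flights": "search for flights between airports on a date",
    "calculator": "evaluate mathematical expressions",
    "weather": "get the current weather for a city",
}
print(select_tools("what is the weather for Paris", tools, top_k=1))  # → ['weather']
```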
Should tool descriptions be short or detailed?
Detailed but concise. The LLM uses the description to decide when a tool is appropriate and how to call it. Include what the tool does, what inputs it expects, and any constraints. Avoid vague descriptions like "A useful tool" — be specific about the use case.
How do I test LangChain tools in isolation?
Call the tool directly using tool.invoke({"param": "value"}) or await tool.ainvoke({"param": "value"}). This runs the underlying function with schema validation. Write unit tests that call tools directly before integrating them into an agent.
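In practice that means tool tests look like ordinary function tests. A pytest-style sketch for the guard logic from the query_database example above, with the database call stubbed out so no agent, LLM, or database is required:

```python
def query_database(sql: str) -> str:
    # Guard logic from the error-handling section, isolated for testing.
    if not sql.strip().upper().startswith("SELECT"):
        return "Error: Only SELECT queries are allowed."
    return "ok"  # stand-in for executing the query

def test_rejects_non_select():
    assert query_database("DROP TABLE users").startswith("Error")

def test_allows_select():
    assert query_database("  select * from users") == "ok"

test_rejects_non_select()
test_allows_select()
```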
CallSphere Team