
LangGraph Tool Nodes: Integrating Function Calling into Graph Workflows

Learn how to integrate LLM function calling into LangGraph workflows using ToolNode, tool binding, automatic tool execution, and structured error handling for reliable agent behavior.

Tools Turn Agents into Actors

An LLM that can only generate text is a reasoner. An LLM that can call tools is an actor — it can search the web, query databases, send emails, and modify external systems. LangGraph provides first-class support for tool integration through its ToolNode class, which automatically executes tool calls from LLM responses and feeds results back into the conversation.

Defining Tools

Tools in LangGraph are defined with LangChain's @tool decorator. Each tool is a Python function whose docstring tells the LLM when and how to call it:

from langchain_core.tools import tool

@tool
def search_web(query: str) -> str:
    """Search the web for current information about a topic."""
    # Real implementation would call a search API
    return f"Top results for '{query}': [simulated search results]"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression and return the result."""
    try:
        result = eval(expression)  # Use a safe evaluator in production
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    return f"Weather in {city}: 72F, partly cloudy"

The function signature and docstring are automatically converted into the JSON schema that the LLM sees for function calling.
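To make that conversion concrete, here is a rough stand-in for what the decorator derives from a function — plain stdlib Python, not LangChain's actual implementation, with `build_tool_schema` as a hypothetical helper:

```python
import inspect
from typing import get_type_hints

# Hypothetical helper approximating the signature-and-docstring-to-schema
# step: the parameter names and type hints become a JSON-schema-like
# "parameters" object, and the docstring becomes the description.
def build_tool_schema(fn) -> dict:
    hints = get_type_hints(fn)
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties = {
        name: {"type": type_map.get(hints.get(name, str), "string")}
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

def get_weather(city: str) -> str:  # undecorated copy of the article's tool
    """Get the current weather for a given city."""
    return f"Weather in {city}: 72F, partly cloudy"

schema = build_tool_schema(get_weather)
print(schema["name"])                      # get_weather
print(schema["parameters"]["properties"])  # {'city': {'type': 'string'}}
```

The real decorator does more (Pydantic validation, nested types), but the mapping from signature to schema is the core idea.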

Binding Tools to the LLM

Before the LLM can call tools, you must bind them to the model:

from langchain_openai import ChatOpenAI

tools = [search_web, calculate, get_weather]
llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools(tools)

The bind_tools method attaches the tool schemas to every request made through the returned model. The LLM now knows these functions exist and can emit structured tool-call requests in its responses.
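When the model decides to use a tool, LangChain normalizes its request into a list of dicts on the response's tool_calls attribute. The values below are hand-written for illustration, not real model output, but they show the shape you can expect:

```python
# Illustrative shape of response.tool_calls after the bound model decides
# to call tools (ids and args are made up for the example):
tool_calls = [
    {"name": "get_weather", "args": {"city": "Tokyo"}, "id": "call_1"},
    {"name": "calculate", "args": {"expression": "42 * 17"}, "id": "call_2"},
]

for call in tool_calls:
    print(f"{call['name']}({call['args']})")
```

Each entry names a tool you bound, carries parsed arguments matching that tool's schema, and has an id used to pair the eventual result back to the request.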

Using ToolNode in the Graph

LangGraph provides a ToolNode that automatically executes tool calls found in the last AI message:

from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

def call_agent(state: AgentState) -> dict:
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: AgentState) -> Literal["tools", "end"]:
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "end"

tool_node = ToolNode(tools)

builder = StateGraph(AgentState)
builder.add_node("agent", call_agent)
builder.add_node("tools", tool_node)

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue, {
    "tools": "tools",
    "end": END,
})
builder.add_edge("tools", "agent")

graph = builder.compile()

When the agent generates tool calls, the router sends execution to the ToolNode. The node looks up each tool by name, calls it with the provided arguments, wraps the results in ToolMessage objects, and returns them to the state. The edge from tools back to agent creates the agentic loop.
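The execution step can be sketched in plain Python. This is an assumed approximation of ToolNode's behavior, not LangGraph source: look each call up by name, invoke it with the supplied arguments, and wrap the output in a tool message tagged with the originating call id.

```python
# Rough stand-in for ToolNode's execution step (illustrative, not the
# real implementation): dispatch each tool call by name and wrap the
# result with the call id so the LLM can match results to requests.
def execute_tool_calls(tool_calls, tools_by_name):
    tool_messages = []
    for call in tool_calls:
        fn = tools_by_name[call["name"]]
        output = fn(**call["args"])
        tool_messages.append(
            {"role": "tool", "tool_call_id": call["id"], "content": output}
        )
    return tool_messages

def get_weather(city):  # undecorated stand-in for the article's tool
    return f"Weather in {city}: 72F, partly cloudy"

msgs = execute_tool_calls(
    [{"name": "get_weather", "args": {"city": "Tokyo"}, "id": "call_1"}],
    {"get_weather": get_weather},
)
print(msgs[0]["content"])
```

In the real graph, these tool messages are appended to state["messages"], which is why the edge back to the agent node gives the LLM the results on its next turn.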


Running the Tool-Calling Agent

from langchain_core.messages import HumanMessage

result = graph.invoke({
    "messages": [HumanMessage(
        content="What is the weather in Tokyo and what is 42 * 17?"
    )]
})

for msg in result["messages"]:
    print(f"{msg.__class__.__name__}: {msg.content[:100]}")

The LLM may generate multiple tool calls in a single response. The ToolNode executes all of them and returns all results before the agent node runs again.
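A toy stand-in (plain Python, not LangGraph internals) makes the parallel-call behavior concrete: two calls from one model turn are executed and their results come back in call order.

```python
def get_weather(city: str) -> str:  # undecorated stand-ins for the article's tools
    return f"Weather in {city}: 72F, partly cloudy"

def calculate(expression: str) -> str:
    return str(eval(expression))  # toy evaluator; use a safe parser in production

tools_by_name = {"get_weather": get_weather, "calculate": calculate}

# Two tool calls as they might appear in a single AI message (hand-written):
calls = [
    {"name": "get_weather", "args": {"city": "Tokyo"}},
    {"name": "calculate", "args": {"expression": "42 * 17"}},
]
results = [tools_by_name[c["name"]](**c["args"]) for c in calls]
print(results)
```

Both results reach the agent node in the same state update, so the LLM can answer the compound question in one final response.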

Handling Tool Errors

Unhandled tool exceptions can crash the graph. ToolNode's handle_tool_errors option catches them instead — it is enabled by default in recent LangGraph releases, but older versions let exceptions propagate, so setting it explicitly is good practice:

tool_node = ToolNode(tools, handle_tool_errors=True)

With this flag, if a tool raises an exception, the error message is returned as the tool result instead of crashing. The LLM sees the error and can decide to retry with different arguments or inform the user.

For custom error handling, wrap your tool logic:

@tool
def safe_database_query(sql: str) -> str:
    """Run a read-only SQL query against the analytics database."""
    try:
        results = execute_query(sql)
        return format_results(results)
    except DatabaseError as e:
        return f"Query failed: {e}. Please check syntax and try again."
    except TimeoutError:
        return "Query timed out. Try a simpler query or add filters."

Returning error strings as tool results — rather than raising exceptions — gives the LLM the chance to self-correct, which is the hallmark of robust agentic behavior.
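A toy illustration of that self-correction loop, using the article's calculate tool in undecorated form — in the real graph the LLM, not hard-coded logic, decides on the retry:

```python
# The failed call returns an error string instead of raising, so the
# conversation survives; a corrected follow-up call then succeeds.
def calculate(expression: str) -> str:
    try:
        return str(eval(expression))  # toy evaluator for the sketch
    except Exception as e:
        return f"Error: {e}"

first = calculate("42 * * 17")  # malformed expression -> error string
retry = calculate("42 * 17")    # corrected arguments on the next turn
print(first)
print(retry)
```

Because the error comes back as an ordinary tool result, the model sees exactly what went wrong and can adjust its arguments rather than the whole run failing.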

FAQ

Can I use tools from different providers like Tavily or Wikipedia?

Yes. Any LangChain-compatible tool works with ToolNode. The LangChain community package includes dozens of pre-built tool integrations for search engines, databases, APIs, and file systems. Just add them to the tools list.

How does the LLM decide which tool to call?

The LLM selects tools based on the function name, docstring, and parameter schema. Writing clear, specific docstrings is the most effective way to improve tool selection accuracy. Ambiguous descriptions lead to incorrect tool calls.

Can a tool call trigger another tool call?

Not directly. Tools return results to the state, then the agent node runs again and the LLM decides whether to make additional tool calls. This loop continues until the LLM responds without tool calls, at which point the router sends execution to END.


#LangGraph #ToolCalling #FunctionCalling #ToolNode #Python #AgenticAI #LearnAI #AIEngineering


CallSphere Team

Expert insights on AI voice agents and customer communication automation.
