
Conditional Routing in LangGraph: Building Decision Points in Agent Workflows

Build intelligent decision points in LangGraph using conditional edges, router functions, and multi-path branching to create agents that dynamically choose their execution path.

Beyond Linear Workflows

A linear chain of nodes — A then B then C — can only model the simplest workflows. Real agent systems need to make decisions: should the agent search the web or query a database? Should it ask for clarification or proceed with the answer? Should it loop back and try again or terminate? Conditional edges are how LangGraph implements this branching logic.

Adding Conditional Edges

A conditional edge evaluates the current state and returns the name of the next node to execute:

from langgraph.graph import StateGraph, START, END
from typing import TypedDict, Annotated, Literal
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    needs_tool: bool

def router(state: AgentState) -> Literal["tool_node", "respond"]:
    if state["needs_tool"]:
        return "tool_node"
    return "respond"

# agent_node, tool_node, and respond_node are node functions defined elsewhere
builder = StateGraph(AgentState)
builder.add_node("agent", agent_node)
builder.add_node("tool_node", tool_node)
builder.add_node("respond", respond_node)

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", router)
builder.add_edge("tool_node", "agent")
builder.add_edge("respond", END)

graph = builder.compile()

The router function inspects state and returns a string matching one of the registered node names. LangGraph calls this function after the source node completes and routes execution accordingly.
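To see this control flow in isolation, here is a framework-free sketch of what LangGraph does with the graph above: run a node, consult the router (or a fixed edge), and jump to whatever node it names. The node bodies are illustrative stubs, not LangGraph APIs — only the node names and routing logic mirror the builder code:

```python
# Framework-free simulation of conditional routing. The stub node
# functions below are assumptions for illustration; in the real graph
# they would call an LLM and tools.
END = "__end__"

def agent_node(state):
    # Pretend the model wants a tool on the first pass only.
    return {**state, "needs_tool": not state.get("tool_done", False)}

def tool_node(state):
    return {**state, "tool_done": True}

def respond_node(state):
    return {**state, "answer": "done"}

def router(state):
    return "tool_node" if state["needs_tool"] else "respond"

nodes = {"agent": agent_node, "tool_node": tool_node, "respond": respond_node}
# Fixed edges map a node to its successor; "agent" defers to the router.
fixed_edges = {"tool_node": "agent", "respond": END}

def run(state):
    current, path = "agent", []
    while current != END:
        path.append(current)
        state = nodes[current](state)
        current = router(state) if current == "agent" else fixed_edges[current]
    return state, path

state, path = run({"needs_tool": False})
# path == ["agent", "tool_node", "agent", "respond"]
```

The trace shows one full loop: the agent requests a tool, the tool edge cycles back to the agent, and the second router call exits to the respond node.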

Router Functions with LLM Output

The most common pattern checks whether the LLM response contains tool calls:

from langchain_core.messages import AIMessage

def should_use_tools(state: AgentState) -> Literal["tools", "end"]:
    last_message = state["messages"][-1]
    if isinstance(last_message, AIMessage) and last_message.tool_calls:
        return "tools"
    return "end"

builder.add_conditional_edges("agent", should_use_tools, {
    "tools": "tool_node",
    "end": END,
})

The optional third argument to add_conditional_edges is a mapping from return values to node names. This decouples the router logic from the exact node names in the graph.
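The decoupling is easy to see in miniature: the router speaks in abstract labels, and the mapping translates them into concrete node names. This is a plain-Python illustration of that lookup (dict-based messages stand in for real message objects), not LangGraph internals:

```python
# The router returns abstract labels; the path map resolves them to
# concrete node names (or END). Renaming "tool_node" in the graph only
# requires touching the mapping, never the router.
END = "__end__"

def should_use_tools(state):
    last = state["messages"][-1]
    return "tools" if last.get("tool_calls") else "end"

path_map = {"tools": "tool_node", "end": END}

def resolve(state):
    return path_map[should_use_tools(state)]

r1 = resolve({"messages": [{"tool_calls": [{"name": "search"}]}]})
r2 = resolve({"messages": [{"content": "final answer"}]})
# r1 == "tool_node", r2 == "__end__"
```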

Multi-Path Branching

Routers can return more than two destinations. Use this for classification-style routing:

def classify_query(state: AgentState) -> Literal[
    "search", "calculate", "database", "clarify"
]:
    last_msg = state["messages"][-1].content.lower()

    if "search" in last_msg or "find" in last_msg:
        return "search"
    elif "calculate" in last_msg or "math" in last_msg:
        return "calculate"
    elif "query" in last_msg or "database" in last_msg:
        return "database"
    else:
        return "clarify"

builder.add_conditional_edges("classifier", classify_query)

Each branch leads to a specialized node that handles that category of request. The example above routes on simple keyword matching; in practice the classifier node often asks the LLM to categorize intent and writes the resulting label into state, so the router only has to read that label and direct execution to the appropriate handler.
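The keyword router is easy to exercise without any graph at all. Here it is run against sample inputs, using a minimal stand-in message class (only a .content attribute is assumed):

```python
# Exercising the multi-path router with a minimal message stand-in.
from dataclasses import dataclass

@dataclass
class Msg:
    content: str

def classify_query(state):
    last_msg = state["messages"][-1].content.lower()
    if "search" in last_msg or "find" in last_msg:
        return "search"
    elif "calculate" in last_msg or "math" in last_msg:
        return "calculate"
    elif "query" in last_msg or "database" in last_msg:
        return "database"
    return "clarify"

assert classify_query({"messages": [Msg("Find me flights")]}) == "search"
assert classify_query({"messages": [Msg("Do the math on this")]}) == "calculate"
assert classify_query({"messages": [Msg("Query the orders table")]}) == "database"
assert classify_query({"messages": [Msg("hello there")]}) == "clarify"
```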


Implementing Cycles with Conditional Edges

Cycles are what make agents truly powerful. An agent loop typically looks like this: reason, optionally call tools, then decide whether to continue or stop:

def agent_loop_router(state: AgentState) -> Literal["tools", "finish"]:
    messages = state["messages"]
    last = messages[-1]

    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "finish"

builder.add_node("agent", call_model)
builder.add_node("tools", execute_tools)
builder.add_node("finish", format_response)

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", agent_loop_router)
builder.add_edge("tools", "agent")  # cycle back
builder.add_edge("finish", END)

The edge from tools back to agent creates a cycle. The agent keeps calling tools until the LLM decides it has enough information, at which point the router sends execution to the finish node.
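The cycle can be simulated end to end with a scripted "model" that asks for a tool on its first turn and answers on its second. The loop below mirrors the edges (agent to tools and back, or agent to finish); the router check is inlined on the last message rather than full state, purely for brevity:

```python
# Scripted stand-in for the LLM: one tool call, then a final answer.
class FakeModel:
    def __init__(self):
        self.turn = 0

    def invoke(self, messages):
        self.turn += 1
        if self.turn == 1:
            return {"tool_calls": [{"name": "search"}]}
        return {"content": "final answer", "tool_calls": []}

def agent_loop_router(last_message):
    return "tools" if last_message.get("tool_calls") else "finish"

model, messages, trace = FakeModel(), [], []
node = "agent"
while node != "finish":
    trace.append(node)
    if node == "agent":
        messages.append(model.invoke(messages))
        node = agent_loop_router(messages[-1])
    else:  # tools: execute, then cycle back to the agent
        messages.append({"content": "tool result"})
        node = "agent"
trace.append("finish")
# trace == ["agent", "tools", "agent", "finish"]
```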

Guard Rails with State Counters

Prevent infinite loops by tracking iteration counts in state:

class SafeAgentState(TypedDict):
    messages: Annotated[list, add_messages]
    loop_count: int

def safe_router(state: SafeAgentState) -> Literal["tools", "finish"]:
    if state["loop_count"] >= 5:
        return "finish"
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "finish"

def increment_and_call(state: SafeAgentState) -> dict:
    # llm is a chat model instance (e.g. with tools bound) defined elsewhere
    response = llm.invoke(state["messages"])
    return {
        "messages": [response],
        "loop_count": state["loop_count"] + 1,
    }

Provided loop_count starts at 0 in the initial state, this guarantees the agent terminates after at most 5 iterations, regardless of the LLM output.
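The guard rail can be verified against the worst case: a model that always wants another tool call. In this framework-free sketch the stand-in node always emits a tool call, and the counter still forces termination:

```python
# Worst-case check: the "model" requests a tool every single turn,
# yet the loop counter caps execution at 5 iterations.
def safe_router(state):
    if state["loop_count"] >= 5:
        return "finish"
    last = state["messages"][-1]
    return "tools" if last.get("tool_calls") else "finish"

state = {"messages": [], "loop_count": 0}
while True:
    # Stand-in for increment_and_call: always emits a tool call.
    state["messages"].append({"tool_calls": [{"name": "search"}]})
    state["loop_count"] += 1
    if safe_router(state) == "finish":
        break
# state["loop_count"] == 5
```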

FAQ

Can a conditional edge route to END directly?

Yes. You can return END from a router function or map a return value to END in the edge mapping. This is the standard way to terminate a workflow from a conditional branch.

What happens if the router returns a node name that does not exist?

LangGraph raises a ValueError at compile time if you use the mapping dictionary, or at runtime if the returned string does not match any registered node. Always use Literal type hints to catch mismatches early.

Can I have multiple conditional edges from the same node?

For conditional edges, attach a single router per node and let it make the whole decision. (Plain add_edge calls can fan out: adding several fixed edges from one node runs the targets in parallel.) If you need multiple branching decisions, chain them through intermediate nodes that each evaluate one condition.
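Chaining decisions looks like this in miniature: the first router only decides whether a tool is needed, and a second router (which would live on an intermediate node in the real graph) decides which tool. Node names here are hypothetical, and the composition is a plain-Python sketch:

```python
# Two chained binary decisions instead of one multi-way router.
def first_router(state):
    # Decides only: tool needed or not?
    return "pick_tool" if state["needs_tool"] else "respond"

def second_router(state):
    # Decides only: which tool? (runs on the intermediate "pick_tool" node)
    return "web_search" if state["tool_kind"] == "search" else "calculator"

def route(state):
    step1 = first_router(state)
    return second_router(state) if step1 == "pick_tool" else step1

dest_tool = route({"needs_tool": True, "tool_kind": "search"})
dest_direct = route({"needs_tool": False, "tool_kind": "search"})
# dest_tool == "web_search", dest_direct == "respond"
```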


#LangGraph #ConditionalRouting #AgentWorkflows #DecisionLogic #Python #AgenticAI #LearnAI #AIEngineering


CallSphere Team

Expert insights on AI voice agents and customer communication automation.
