LangGraph: Building Stateful Multi-Agent Workflows with Graphs
Learn LangGraph's graph-based approach to building stateful, multi-step AI workflows — including nodes, edges, conditional routing, state management, and human-in-the-loop patterns.
Why LangGraph Over Plain Agents
LangChain agents follow a linear loop: reason, act, observe, repeat. This works for simple tool-using agents, but falls short for complex workflows that need branching logic, parallel execution, human approval steps, or multiple specialized agents collaborating.
LangGraph models workflows as directed graphs. Each node is a function that transforms state. Edges define the flow between nodes, and conditional edges enable dynamic routing. The state is a typed object that persists across the entire execution, and checkpointing lets you pause, resume, or replay workflows.
Core Concepts
A LangGraph workflow has four elements:
- State — a typed dictionary that flows through the graph
- Nodes — functions that read and modify state
- Edges — connections between nodes (static or conditional)
- Graph — the compiled workflow that orchestrates execution
Defining State
State is a TypedDict that represents all the information your workflow needs.
from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    next_action: str
    retry_count: int
The Annotated type with add_messages tells LangGraph to append new messages to the list rather than replacing it. This is how conversation history accumulates across nodes.
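Conceptually, a reducer is just a function that merges a node's partial update into the existing state. The sketch below mimics that mechanic in plain Python — `add_messages_sketch` and `apply_update` are simplified stand-ins for illustration, not LangGraph's actual internals:

```python
from typing import Annotated, TypedDict, get_type_hints

def add_messages_sketch(existing: list, new: list) -> list:
    # Simplified stand-in for langgraph's add_messages reducer:
    # append new messages instead of replacing the list
    return existing + new

class AgentState(TypedDict):
    messages: Annotated[list, add_messages_sketch]
    next_action: str

def apply_update(state: dict, update: dict, schema) -> dict:
    # Mimic how a partial update from a node merges into state:
    # if a field's annotation carries a reducer, call it; else overwrite.
    hints = get_type_hints(schema, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        metadata = getattr(hints[key], "__metadata__", ())
        if metadata:
            merged[key] = metadata[0](state[key], value)  # reducer merge
        else:
            merged[key] = value  # plain overwrite
    return merged

state = {"messages": ["hi"], "next_action": ""}
state = apply_update(state, {"messages": ["hello!"], "next_action": "done"}, AgentState)
# state["messages"] is now ["hi", "hello!"] — appended, not replaced
```

This is why nodes return small dicts with only the keys they changed: the framework, not the node, decides how each key merges.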
Building a Simple Graph
Here is a basic two-node workflow: one node generates a response, another checks if the response is satisfactory.
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
    is_satisfactory: bool

llm = ChatOpenAI(model="gpt-4o-mini")

def generate(state: State) -> dict:
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def evaluate(state: State) -> dict:
    last_message = state["messages"][-1].content
    is_good = len(last_message) > 50  # Simple quality check
    return {"is_satisfactory": is_good}

# Build the graph
graph = StateGraph(State)
graph.add_node("generate", generate)
graph.add_node("evaluate", evaluate)
graph.add_edge(START, "generate")
graph.add_edge("generate", "evaluate")

# Conditional edge: retry or finish
def should_retry(state: State) -> str:
    if state["is_satisfactory"]:
        return "end"
    return "retry"

graph.add_conditional_edges(
    "evaluate",
    should_retry,
    {"end": END, "retry": "generate"},
)

# Compile and run
app = graph.compile()
result = app.invoke({
    "messages": [("human", "Write a haiku about Python programming")],
    "is_satisfactory": False,
})
print(result["messages"][-1].content)
The graph generates a response, evaluates it, and retries if the evaluation fails. This retry loop is trivial in a graph but awkward to implement in a linear agent.
Multi-Agent Collaboration
LangGraph excels at orchestrating multiple specialized agents. Each agent is a node, and a router decides which agent handles the next step.
from langgraph.graph import StateGraph, START, END

class MultiAgentState(TypedDict):
    messages: Annotated[list, add_messages]
    current_agent: str

# Each specialist can use a different model or system prompt;
# the model choices here are illustrative
coding_llm = ChatOpenAI(model="gpt-4o")
research_llm = ChatOpenAI(model="gpt-4o-mini")
general_llm = ChatOpenAI(model="gpt-4o-mini")

def router(state: MultiAgentState) -> dict:
    last_msg = state["messages"][-1].content.lower()
    if "code" in last_msg or "bug" in last_msg:
        return {"current_agent": "coder"}
    elif "research" in last_msg or "find" in last_msg:
        return {"current_agent": "researcher"}
    return {"current_agent": "generalist"}

def coder_agent(state: MultiAgentState) -> dict:
    response = coding_llm.invoke(state["messages"])
    return {"messages": [response]}

def researcher_agent(state: MultiAgentState) -> dict:
    response = research_llm.invoke(state["messages"])
    return {"messages": [response]}

def generalist_agent(state: MultiAgentState) -> dict:
    response = general_llm.invoke(state["messages"])
    return {"messages": [response]}

graph = StateGraph(MultiAgentState)
graph.add_node("router", router)
graph.add_node("coder", coder_agent)
graph.add_node("researcher", researcher_agent)
graph.add_node("generalist", generalist_agent)
graph.add_edge(START, "router")
graph.add_conditional_edges(
    "router",
    lambda s: s["current_agent"],
    {"coder": "coder", "researcher": "researcher", "generalist": "generalist"},
)
graph.add_edge("coder", END)
graph.add_edge("researcher", END)
graph.add_edge("generalist", END)
app = graph.compile()
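Because nodes are plain functions, routing logic like this is easy to unit-test in isolation. The minimal `Msg` stub below stands in for LangChain's message objects (which expose a `.content` attribute):

```python
class Msg:
    # Minimal stand-in for a LangChain message with a .content attribute
    def __init__(self, content: str):
        self.content = content

def router(state: dict) -> dict:
    # Same keyword-routing logic as the graph node above
    last_msg = state["messages"][-1].content.lower()
    if "code" in last_msg or "bug" in last_msg:
        return {"current_agent": "coder"}
    elif "research" in last_msg or "find" in last_msg:
        return {"current_agent": "researcher"}
    return {"current_agent": "generalist"}

assert router({"messages": [Msg("Fix this bug")]})["current_agent"] == "coder"
assert router({"messages": [Msg("Research pricing options")]})["current_agent"] == "researcher"
assert router({"messages": [Msg("Hello there")]})["current_agent"] == "generalist"
```

In practice you would replace keyword matching with an LLM-based classifier, but the testing approach stays the same.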
Human-in-the-Loop with Checkpointing
LangGraph's checkpointer lets you pause execution for human review and resume later.
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = graph.compile(
    checkpointer=checkpointer,
    interrupt_before=["execute_action"],  # Pause before this node
)

config = {"configurable": {"thread_id": "user-123"}}

# Run until the interrupt point
result = app.invoke(
    {"messages": [("human", "Delete all inactive users")]},
    config=config,
)
# Execution pauses before "execute_action"
# Human reviews and approves

# Resume execution
result = app.invoke(None, config=config)
The interrupt_before parameter pauses the graph before the specified node executes. State is saved to the checkpointer, so you can resume from a different process or after a server restart.
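At its core, a checkpointer is a keyed store of serialized state per thread. The toy class below illustrates that mechanic only — it is not LangGraph's checkpointer interface:

```python
import json
from typing import Optional

class TinyCheckpointer:
    # Toy illustration of checkpointer mechanics: persist state per
    # thread_id so a paused workflow can resume from another process.
    def __init__(self):
        self._store: dict = {}

    def put(self, thread_id: str, state: dict) -> None:
        self._store[thread_id] = json.dumps(state)  # serialize on save

    def get(self, thread_id: str) -> Optional[dict]:
        raw = self._store.get(thread_id)
        return json.loads(raw) if raw is not None else None

cp = TinyCheckpointer()
cp.put("user-123", {"messages": ["Delete all inactive users"],
                    "paused_at": "execute_action"})
resumed = cp.get("user-123")
# resumed["paused_at"] == "execute_action"
```

Real checkpointers also store a history of checkpoints per thread, which is what enables replaying or forking a past run.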
Streaming Graph Execution
LangGraph supports streaming at multiple levels.
# Stream state updates from each node
for event in app.stream(
    {"messages": [("human", "Analyze this data")]},
    stream_mode="updates",
):
    print(event)

# Stream individual tokens from LLM nodes
async for event in app.astream_events(
    {"messages": [("human", "Write an essay")]},
    version="v2",
):
    if event["event"] == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="")
FAQ
When should I use LangGraph instead of a simple LangChain agent?
Use LangGraph when your workflow needs branching logic, multiple agents, human approval steps, or persistent state across interactions. For a single agent with a few tools that operates in a straightforward loop, AgentExecutor is simpler and sufficient.
How does LangGraph handle state persistence in production?
LangGraph supports multiple checkpointer backends. MemorySaver is for development. For production, use SqliteSaver, PostgresSaver, or implement a custom checkpointer backed by Redis or your preferred database. State is serialized and restored automatically.
Can LangGraph nodes run in parallel?
Yes. When multiple edges lead from the same node to different nodes without dependencies between them, LangGraph can execute those nodes concurrently. Use the Send API for map-reduce patterns where you dynamically create parallel branches at runtime.
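The map-reduce idea behind Send-style fan-out — dispatch one branch per item, then merge the results back into shared state — can be sketched without LangGraph using a thread pool (`summarize` is a hypothetical stand-in for an LLM-backed node):

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(doc: str) -> str:
    # Stand-in for an LLM-backed worker node; each call can run concurrently
    return doc.upper()

def map_reduce(docs: list) -> list:
    # "Map": dispatch one branch per document, analogous to emitting
    # Send("summarize", {...}) for each item at runtime
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(summarize, docs))
    # "Reduce": merged results flow back into shared state
    return results

print(map_reduce(["alpha", "beta"]))  # ['ALPHA', 'BETA']
```

In LangGraph the fan-out count is decided at runtime by the conditional edge, and the reducer on the state field handles the merge.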
#LangGraph #MultiAgent #StateManagement #Workflow #Python #AgenticAI #LearnAI #AIEngineering
CallSphere Team
Expert insights on AI voice agents and customer communication automation.