Human-in-the-Loop with LangGraph: Approval Gates and Manual Intervention Points
Implement human approval gates in LangGraph using interrupt_before, interrupt_after, and resume patterns to build agent workflows that pause for human review before executing sensitive actions.
Why Agents Need Human Oversight
Fully autonomous agents are powerful but dangerous in production. An agent that can send emails, modify databases, or make API calls to external services should not do so without guardrails. Human-in-the-loop patterns let you build agents that pause at critical decision points, present their intended actions to a human reviewer, and only proceed after explicit approval.
LangGraph implements this through interrupts — points in the graph where execution pauses and waits for external input before continuing.
Setting Up Interrupts
Interrupts require a checkpointer because the graph state must be persisted while waiting for human input:
from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified recipient."""
    # Real implementation here
    return f"Email sent to {to}"

tools = [send_email]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
tool_node = ToolNode(tools)

class State(TypedDict):
    messages: Annotated[list, add_messages]

checkpointer = MemorySaver()
Using interrupt_before
The interrupt_before parameter on compile() pauses execution before a specified node runs:
def call_agent(state: State) -> dict:
    return {"messages": [llm.invoke(state["messages"])]}

def route(state: State) -> Literal["tools", "end"]:
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "end"

builder = StateGraph(State)
builder.add_node("agent", call_agent)
builder.add_node("tools", tool_node)
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", route, {
    "tools": "tools",
    "end": END,
})
builder.add_edge("tools", "agent")

graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_before=["tools"],
)
Now every time the agent wants to execute a tool, the graph pauses before the tools node runs. The caller can inspect the pending tool calls and decide whether to approve.
The Approval Loop
Here is the complete pattern for running the graph with human approval:
from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "approval-demo"}}

# Initial invocation — will pause before tools
result = graph.invoke(
    {"messages": [HumanMessage(content="Send an email to bob@example.com saying hello")]},
    config=config,
)

# Inspect what the agent wants to do
state = graph.get_state(config)
pending_calls = state.values["messages"][-1].tool_calls
print("Agent wants to execute:")
for call in pending_calls:
    print(f"  {call['name']}({call['args']})")

# Human approves — resume execution with None input
approved = input("Approve? (y/n): ")
if approved.lower() == "y":
    result = graph.invoke(None, config=config)
    print("Execution completed:", result["messages"][-1].content)
else:
    print("Execution rejected by human reviewer.")
Passing None to invoke() tells LangGraph to resume from the checkpoint without adding new input. Execution continues from exactly where it paused.
Using interrupt_after
Sometimes you want to pause after a node runs rather than before. This is useful for review-then-continue patterns:
graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_after=["agent"],
)
With interrupt_after, the agent node completes and its output is saved to state, then execution pauses. The human can review the agent's reasoning or proposed tool calls, then resume or modify the state before continuing.
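A sketch of that review step (review_pending is a hypothetical helper, not a LangGraph API) — since the agent's output is already in state when the graph pauses, the snapshot exposes everything a reviewer needs:

```python
def review_pending(graph, config) -> list:
    """Collect the tool calls proposed by the last message in a paused
    thread, so a reviewer can inspect them before resuming.

    Works with any object exposing LangGraph's get_state() interface.
    """
    snapshot = graph.get_state(config)
    last = snapshot.values["messages"][-1]
    # Messages without tool calls (plain text replies) yield an empty list.
    return list(getattr(last, "tool_calls", None) or [])
```

A reviewer UI can render the returned calls and then resume with graph.invoke(None, config=config) once satisfied.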
Modifying State Before Resuming
You can edit the graph state before resuming, which lets humans correct agent mistakes:
# After interrupt, modify the state
graph.update_state(
    config,
    {"messages": [HumanMessage(content="Actually, send it to alice@example.com instead")]},
)

# Resume with the modified state
result = graph.invoke(None, config=config)
This pattern is powerful for correction workflows where the human wants to adjust the agent's plan without starting over from scratch.
Selective Interrupts
Not every tool call needs approval. You can implement selective interruption by checking tool names in a custom node:
SENSITIVE_TOOLS = {"send_email", "delete_record", "make_payment"}

def check_approval(state: State) -> Literal["needs_approval", "safe"]:
    tool_calls = getattr(state["messages"][-1], "tool_calls", None) or []
    for call in tool_calls:
        if call["name"] in SENSITIVE_TOOLS:
            return "needs_approval"
    return "safe"
Route sensitive tool calls through an approval gate while letting safe tools execute automatically.
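One way to wire this up, as a sketch: add a pass-through gate node that exists only as an interrupt point, and interrupt only before that node (the human_gate node name and route_with_gate helper are illustrative, not LangGraph APIs). The router itself is plain Python and easy to test in isolation:

```python
from typing import Literal

SENSITIVE_TOOLS = {"send_email", "delete_record", "make_payment"}

def route_with_gate(tool_calls: list) -> Literal["needs_approval", "safe", "end"]:
    """Classify a batch of proposed tool calls for a conditional edge."""
    if not tool_calls:
        return "end"  # no tool calls: the agent produced a final answer
    if any(call["name"] in SENSITIVE_TOOLS for call in tool_calls):
        return "needs_approval"
    return "safe"

# Wiring sketch against the builder from earlier (names illustrative):
# builder.add_node("human_gate", lambda state: {})  # pass-through interrupt point
# builder.add_conditional_edges("agent", <router over state>, {
#     "needs_approval": "human_gate",
#     "safe": "tools",
#     "end": END,
# })
# builder.add_edge("human_gate", "tools")
# graph = builder.compile(checkpointer=checkpointer,
#                         interrupt_before=["human_gate"])
```

With this layout, safe tool calls flow straight from agent to tools, while sensitive ones park at human_gate until a reviewer resumes the thread.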
FAQ
Can I set a timeout for human approval?
LangGraph itself does not have a built-in timeout mechanism for interrupts. You implement timeouts in your application layer — for example, a web server that cancels the workflow if no approval arrives within a time window. The checkpointed state persists indefinitely until resumed or discarded.
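One application-layer approach, as a sketch: record when the interrupt fired (for example, from the checkpoint's creation timestamp) and have a periodic job expire stale approvals. The helper below assumes an ISO-8601 timestamp string and is not a LangGraph API:

```python
from datetime import datetime, timedelta, timezone

def is_expired(created_at: str, ttl: timedelta) -> bool:
    """Return True if a pending approval, checkpointed at `created_at`
    (an ISO-8601 timestamp), has waited longer than `ttl`."""
    started = datetime.fromisoformat(created_at)
    if started.tzinfo is None:
        # Assume UTC for naive timestamps.
        started = started.replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - started > ttl
```

A cleanup job can then reject or archive any thread for which is_expired(...) returns True, since the checkpointed state itself never times out on its own.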
What happens if I never resume an interrupted graph?
The state remains checkpointed and can be resumed at any time, even days later. The graph does not consume resources while paused. This makes interrupts suitable for asynchronous approval workflows where a human might review actions hours after the agent proposes them.
Can I combine interrupt_before and interrupt_after?
Yes. You can pass different node lists to each parameter. For example, interrupt before tool execution for approval and interrupt after the final response for quality review. Both can be active on the same compiled graph.
#LangGraph #HumanintheLoop #ApprovalGates #AgentSafety #Python #AgenticAI #LearnAI #AIEngineering
CallSphere Team
Expert insights on AI voice agents and customer communication automation.