# Migrating from LangChain to OpenAI Agents SDK: A Practical Guide

A hands-on guide to migrating AI agent code from LangChain to the OpenAI Agents SDK. Covers concept mapping, code translation, testing strategies, and gradual migration paths.
## Why Teams Migrate from LangChain
LangChain was the first widely adopted framework for building LLM applications, and it earned that position by moving fast. But as production requirements matured, teams encountered pain points: deep abstraction layers that obscured what prompts actually reached the model, rapidly changing APIs with frequent breaking changes, and heavyweight dependency trees.
The OpenAI Agents SDK takes a different approach: minimal abstractions, explicit control flow, and built-in primitives for the patterns that matter most in production — tool calling, agent handoffs, guardrails, and tracing.
## Concept Mapping: LangChain to Agents SDK
Understanding the conceptual mapping is the first step. Here is how the core primitives translate:
| LangChain | OpenAI Agents SDK | Notes |
|---|---|---|
| `ChatOpenAI` | `Agent(model="gpt-4o")` | Model config lives on the `Agent` |
| `Tool` / `@tool` | `@function_tool` | Decorator-based, type-safe |
| `AgentExecutor` | `Runner.run()` | Manages the agent loop |
| `ConversationBufferMemory` | Conversation history in input | Explicit message list |
| `Chain` | Agent handoffs | Compose via `handoffs=[]` |
| `OutputParser` | `output_type=MyModel` | Pydantic model on the `Agent` |
## Translating a LangChain Agent to Agents SDK
Here is a typical LangChain agent that looks up product information:
```python
# ── LangChain version ──
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate

@tool
def lookup_product(product_id: str) -> str:
    """Look up product details by ID."""
    # database call here
    return f"Product {product_id}: Widget Pro, $49.99, in stock"

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a product assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_openai_tools_agent(llm, [lookup_product], prompt)
executor = AgentExecutor(agent=agent, tools=[lookup_product])
result = executor.invoke({"input": "Tell me about product P-1234"})
```
And here is the equivalent in the OpenAI Agents SDK:
```python
# ── OpenAI Agents SDK version ──
from agents import Agent, Runner, function_tool

@function_tool
def lookup_product(product_id: str) -> str:
    """Look up product details by ID."""
    return f"Product {product_id}: Widget Pro, $49.99, in stock"

agent = Agent(
    name="Product Assistant",
    instructions="You are a product assistant.",
    model="gpt-4o",
    tools=[lookup_product],
)

result = Runner.run_sync(agent, "Tell me about product P-1234")
print(result.final_output)
```
The SDK version is roughly half the code. The agent loop, tool execution, and response parsing are handled internally by `Runner`.
## Migrating Chains to Handoffs
LangChain uses chains to compose multiple steps. The Agents SDK uses handoffs to delegate between specialized agents.
```python
from agents import Agent, Runner

billing_agent = Agent(
    name="Billing Agent",
    instructions="Handle billing questions. Access account data.",
    model="gpt-4o",
)

shipping_agent = Agent(
    name="Shipping Agent",
    instructions="Handle shipping and delivery questions.",
    model="gpt-4o",
)

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route the user to the right specialist agent.",
    model="gpt-4o",
    handoffs=[billing_agent, shipping_agent],
)

result = Runner.run_sync(triage_agent, "Where is my order?")
print(result.final_output)
```
## Gradual Migration Strategy
Do not rewrite everything at once. Migrate one agent or chain at a time.
```python
# Compatibility wrapper: run both implementations and compare outputs
async def migrate_with_comparison(user_input: str):
    langchain_result = executor.invoke({"input": user_input})
    # Use the async Runner.run here; run_sync cannot be called from
    # inside a running event loop
    sdk_result = await Runner.run(agent, user_input)
    match = langchain_result["output"] == sdk_result.final_output
    log_comparison(user_input, langchain_result, sdk_result, match)
    # Return the SDK result once comparison logs show high agreement
    return sdk_result.final_output
```
## FAQ
### Can the Agents SDK work with non-OpenAI models like LangChain does?
Yes. The Agents SDK supports any model via the LiteLLM integration. Install `openai-agents[litellm]` and use model strings like `litellm/anthropic/claude-sonnet-4-20250514`. The tool calling and handoff mechanics work the same regardless of the model provider.
### How do I migrate LangChain memory to the Agents SDK?
The Agents SDK does not have a built-in memory abstraction. Instead, you pass conversation history explicitly as a list of messages in the input parameter. Extract your existing conversation history from LangChain memory stores and format it as standard message dicts.
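A hedged sketch of that conversion (the `to_sdk_messages` helper, the role mapping, and the sample history are hypothetical, assuming LangChain-style messages with `human`/`ai` types):

```python
# Hypothetical helper: convert LangChain-style memory entries into the
# plain role/content dicts the Agents SDK accepts as input
def to_sdk_messages(chat_history):
    role_map = {"human": "user", "ai": "assistant"}
    return [
        {"role": role_map.get(m["type"], m["type"]), "content": m["content"]}
        for m in chat_history
    ]

history = [
    {"type": "human", "content": "Tell me about product P-1234"},
    {"type": "ai", "content": "Widget Pro costs $49.99 and is in stock."},
]

# Append the new user turn to the converted history
messages = to_sdk_messages(history) + [
    {"role": "user", "content": "Is it available in blue?"}
]

# result = Runner.run_sync(agent, messages)
```

For conversations that stay within the SDK, `result.to_input_list()` on a previous run gives you the history to carry forward without any conversion.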
### What about LangChain's document loaders and vector store integrations?
Those are data pipeline tools, not agent framework features. You can keep using LangChain's document loaders and vector stores alongside the Agents SDK. Wrap the retrieval logic in a `@function_tool`, and the agent will call it like any other tool.
#LangChain #OpenAIAgentsSDK #Migration #Python #FrameworkMigration #AgenticAI #LearnAI #AIEngineering
CallSphere Team
Expert insights on AI voice agents and customer communication automation.