Migrating Between Agent Frameworks: Practical Guide to Switching Without Rewriting
Learn how to migrate between agent frameworks using abstraction layers, interface design, gradual migration strategies, and comprehensive testing to avoid costly full rewrites.
Why Framework Migrations Happen
Framework migrations are inevitable in a fast-moving space. Teams switch for legitimate reasons: the original framework does not support a needed feature, performance requirements change, the team grows and needs better enterprise tooling, or a new framework genuinely solves their problems better.
The cost of migration depends entirely on how tightly coupled your code is to the framework. Teams that built their entire application logic inside framework-specific abstractions face a rewrite. Teams that kept a clean separation between business logic and orchestration can switch frameworks in days.
The Abstraction Layer Pattern
The most effective migration strategy is one you implement before you need to migrate: an abstraction layer that isolates your business logic from the framework.
```python
# abstractions/agent.py — Framework-independent interfaces
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ToolResult:
    content: str
    error: str | None = None

@dataclass
class AgentResponse:
    text: str
    tool_calls_made: list[str]
    tokens_used: int

class AgentTool(ABC):
    @property
    @abstractmethod
    def name(self) -> str: ...

    @property
    @abstractmethod
    def description(self) -> str: ...

    @abstractmethod
    async def execute(self, **kwargs) -> ToolResult: ...

class AgentRunner(ABC):
    @abstractmethod
    async def run(self, message: str, tools: list[AgentTool]) -> AgentResponse: ...
```
Your business logic tools implement AgentTool:
```python
# tools/weather.py — Framework-independent tool
from abstractions.agent import AgentTool, ToolResult
import httpx

class WeatherTool(AgentTool):
    @property
    def name(self) -> str:
        return "get_weather"

    @property
    def description(self) -> str:
        return "Get current weather for a city"

    async def execute(self, city: str) -> ToolResult:
        async with httpx.AsyncClient() as client:
            resp = await client.get(f"https://api.weather.example/v1/{city}")
            return ToolResult(content=resp.text)
```
Then you write thin adapters for each framework:
```python
# adapters/openai_agents.py
from agents import Agent, Runner, function_tool
from abstractions.agent import AgentRunner, AgentTool, AgentResponse

def _wrap_tool(t: AgentTool):
    # A factory binds each tool in its own closure. Note that a default
    # parameter after **kwargs (e.g. `def wrapper(**kwargs, _tool=t)`)
    # is a syntax error in Python, so the closure does the binding.
    @function_tool(name_override=t.name, description_override=t.description)
    async def wrapper(**kwargs):
        result = await t.execute(**kwargs)
        return result.content
    return wrapper

class OpenAIAgentsRunner(AgentRunner):
    def __init__(self, model: str = "gpt-4o", instructions: str = ""):
        self.model = model
        self.instructions = instructions

    async def run(self, message: str, tools: list[AgentTool]) -> AgentResponse:
        # Convert abstract tools to framework-specific tools
        sdk_tools = [_wrap_tool(t) for t in tools]
        agent = Agent(
            name="Assistant",
            instructions=self.instructions,
            tools=sdk_tools,
            model=self.model,
        )
        result = await Runner.run(agent, message)
        return AgentResponse(
            text=result.final_output,
            tool_calls_made=[],  # left empty in this sketch
            tokens_used=result.raw_responses[-1].usage.total_tokens,
        )
```
```python
# adapters/langchain_runner.py
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain_openai import ChatOpenAI
from langchain.tools import StructuredTool
from abstractions.agent import AgentRunner, AgentTool, AgentResponse

class LangChainRunner(AgentRunner):
    def __init__(self, model: str = "gpt-4o", instructions: str = ""):
        self.model = model
        self.instructions = instructions

    async def run(self, message: str, tools: list[AgentTool]) -> AgentResponse:
        lc_tools = [
            StructuredTool.from_function(
                coroutine=t.execute,  # async callables go in `coroutine`, not `func`
                name=t.name,
                description=t.description,
            )
            for t in tools
        ]
        llm = ChatOpenAI(model=self.model)
        # ... set up agent executor (create_openai_tools_agent + AgentExecutor)
        result = await executor.ainvoke({"input": message})
        return AgentResponse(
            text=result["output"],
            tool_calls_made=[],
            tokens_used=0,
        )
```
Now switching frameworks is a one-line change:
```python
# Switch from OpenAI Agents SDK to LangChain
# runner = OpenAIAgentsRunner(instructions="You are helpful.")
runner = LangChainRunner(instructions="You are helpful.")

# All your tools work unchanged
tools = [WeatherTool(), CalculatorTool(), DatabaseTool()]
response = await runner.run("What is the weather in NYC?", tools)
```
Gradual Migration Strategy
Full framework rewrites are risky. Instead, migrate gradually:
Phase 1 — Introduce the abstraction layer. Wrap your existing framework behind the abstract interface. All existing code continues to work through the current adapter. No behavior changes.
Phase 2 — Migrate tools. Move tool implementations from framework-specific code to the framework-independent AgentTool interface. Test each tool independently.
Phase 3 — Build the new adapter. Implement the AgentRunner interface for the target framework. Run both adapters in parallel to compare outputs.
Phase 4 — Switch traffic. Route a percentage of requests to the new framework using feature flags. Monitor for regressions.
```python
# Feature flag for gradual rollout
import os
import random

def get_runner() -> AgentRunner:
    if random.random() < float(os.getenv("NEW_FRAMEWORK_PERCENTAGE", "0")):
        return LangChainRunner(instructions="You are helpful.")
    return OpenAIAgentsRunner(instructions="You are helpful.")
```
Phase 5 — Remove the old adapter. Once all traffic is on the new framework and monitoring confirms stability, delete the old adapter code.
Testing During Migration
The abstraction layer makes testing straightforward. You can write tests against the abstract interface that validate behavior regardless of the underlying framework:
```python
import pytest
from abstractions.agent import AgentRunner, AgentResponse
from adapters.openai_agents import OpenAIAgentsRunner
from adapters.langchain_runner import LangChainRunner
from tools.weather import WeatherTool

@pytest.fixture(params=["openai", "langchain"])
def runner(request) -> AgentRunner:
    if request.param == "openai":
        return OpenAIAgentsRunner(instructions="You are helpful.")
    return LangChainRunner(instructions="You are helpful.")

@pytest.mark.asyncio
async def test_weather_tool_called(runner: AgentRunner):
    """Both frameworks should successfully use the weather tool."""
    tools = [WeatherTool()]
    response = await runner.run("What is the weather in Tokyo?", tools)
    assert response.text  # Non-empty response
    assert "Tokyo" in response.text
```
Running the same test suite against both adapters catches behavioral differences between frameworks before they reach production.
Common Migration Pitfalls
Migrating prompt templates: Frameworks handle system prompts, conversation history, and tool descriptions differently. Prompts optimized for one framework may perform poorly on another. Budget time for prompt tuning after migration.
Streaming behavior differences: Streaming APIs vary significantly between frameworks. Some stream tokens, others stream events, and the event schemas differ. If your application depends on streaming, test the streaming path thoroughly.
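One way to keep streaming portable is to normalize each framework's native stream into a small event type of your own. This is a hypothetical extension of the abstraction layer, not part of the interfaces shown above; `StreamEvent` and the stub adapter are illustrative names:

```python
# Hypothetical sketch: adapters translate their framework's stream
# into one normalized event type the application consumes.
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, Literal

@dataclass
class StreamEvent:
    kind: Literal["token", "tool_call", "done"]
    data: str

async def fake_adapter_stream() -> AsyncIterator[StreamEvent]:
    # A real adapter would consume the framework's stream here and
    # emit normalized events; this stub yields a fixed sequence.
    for ev in (StreamEvent("token", "Hel"),
               StreamEvent("token", "lo"),
               StreamEvent("done", "")):
        yield ev

async def collect_text() -> str:
    # Application code only ever sees StreamEvent, never the
    # framework's native event schema.
    parts = []
    async for ev in fake_adapter_stream():
        if ev.kind == "token":
            parts.append(ev.data)
    return "".join(parts)

text = asyncio.run(collect_text())
print(text)  # Hello
```

The payoff is that your UI code depends on `StreamEvent`, so a framework swap only changes the adapter's translation logic.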
Error handling semantics: How a framework handles tool execution errors, rate limits, and malformed LLM responses varies. Map these cases explicitly in your adapter.
Hidden state management: Some frameworks maintain conversation state implicitly. When migrating, make sure you are explicitly managing state in your abstraction layer rather than relying on framework internals.
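A minimal sketch of owning that state yourself, assuming a neutral `ConversationState` type (illustrative, not from any SDK). Your code holds the history and hands the full transcript to the adapter on every call, so no framework-internal memory is involved:

```python
# Hypothetical sketch: conversation state owned by application code.
# Adapters receive the full history each call instead of relying on
# framework-internal memory.
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str      # "user" or "assistant"
    content: str

@dataclass
class ConversationState:
    turns: list[Turn] = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.turns.append(Turn(role, content))

    def as_messages(self) -> list[dict]:
        # Adapters convert this neutral form into their framework's
        # message format.
        return [{"role": t.role, "content": t.content} for t in self.turns]

state = ConversationState()
state.add("user", "What is the weather in NYC?")
state.add("assistant", "It is sunny.")
messages = state.as_messages()
print(len(messages))  # 2
```

Because the state lives outside every framework, migrating it is a no-op: only the `as_messages` conversion inside each adapter changes.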
FAQ
Is the abstraction layer worth the overhead if I might never migrate?
Yes. The abstraction layer also improves testability (you can mock the runner), makes it easier to A/B test different models or providers, and keeps your business logic clean. It pays for itself even if you never switch frameworks.
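To make the testability point concrete, here is a sketch of a mock runner that implements the same `AgentRunner` contract, so business logic is testable with no framework or LLM call at all (interfaces are inlined to keep the example self-contained; `MockRunner` and `handle_question` are illustrative names):

```python
# Sketch of the testability win: a mock that satisfies the abstract
# runner contract and records what it was asked.
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class AgentResponse:
    text: str
    tool_calls_made: list[str]
    tokens_used: int

class AgentRunner(ABC):
    @abstractmethod
    async def run(self, message: str, tools: list) -> AgentResponse: ...

class MockRunner(AgentRunner):
    """Returns a canned response and records every message received."""
    def __init__(self, canned: str):
        self.canned = canned
        self.calls: list[str] = []

    async def run(self, message: str, tools: list) -> AgentResponse:
        self.calls.append(message)
        return AgentResponse(text=self.canned, tool_calls_made=[], tokens_used=0)

async def handle_question(runner: AgentRunner, q: str) -> str:
    # Example business logic that only knows the abstract interface.
    resp = await runner.run(q, tools=[])
    return resp.text

mock = MockRunner("42")
answer = asyncio.run(handle_question(mock, "meaning of life?"))
print(answer)       # 42
print(mock.calls)   # ['meaning of life?']
```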
How do I handle framework-specific features that do not map to the abstraction?
Add optional capabilities to your interface. For example, if only one framework supports native guardrails, add an optional guardrails parameter that the adapter uses if available and ignores otherwise. Do not let the abstraction become a lowest-common-denominator interface — extend it for valuable features.
What about multi-agent patterns that differ between frameworks?
Multi-agent orchestration is harder to abstract because the patterns vary significantly (handoffs vs. group chat vs. crews). For multi-agent systems, the abstraction layer works best at the individual agent level. The orchestration logic may remain framework-specific, but the agents and tools within it stay portable.
#AgentMigration #SoftwareArchitecture #AgentFrameworks #Refactoring #Python #AgenticAI #LearnAI #AIEngineering
CallSphere Team