
Database Migrations for AI Agent Systems: Alembic and Prisma Migrate Patterns

Master database migration workflows for AI agent systems using Alembic and Prisma Migrate including data migrations, rollback strategies, zero-downtime deployments, and CI integration patterns.

Why Migrations Matter for Agent Systems

AI agent systems evolve rapidly. You add new tool tables, extend message schemas with token tracking, introduce conversation tagging, or restructure metadata columns. Every schema change must be applied consistently across development, staging, and production databases without losing data or causing downtime.

Database migrations encode schema changes as versioned, ordered scripts that can be applied forward and rolled back. They are the source of truth for your database structure and should live in version control alongside your application code.
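The core mechanism is simple enough to sketch. Below is a toy runner, illustrative only — Alembic and Prisma do this with far more care around locking, transactions, and checksums — showing the essential contract: each migration has a version, pending migrations apply in order, and applied versions are recorded so reruns are no-ops:

```python
def apply_pending(migrations, applied, execute):
    """Apply migrations not yet in `applied`, in order.

    migrations: ordered list of (version, up_sql) pairs
    applied: set of version strings already run (normally a DB table)
    execute: callable that runs a SQL string
    """
    ran = []
    for version, up_sql in migrations:
        if version in applied:
            continue  # already recorded, skip on rerun
        execute(up_sql)
        applied.add(version)  # record so this version never runs twice
        ran.append(version)
    return ran
```

Real tools also store the chain order and revision checksums, so an edited history is detected rather than silently reapplied.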

Alembic Migration Workflows

Alembic is the standard migration tool for Python applications using SQLAlchemy. Initialize it in your project:

pip install alembic
alembic init alembic

Configure alembic/env.py to use your SQLAlchemy models:

# alembic/env.py
from alembic import context

from app.models import Base
from app.config import DATABASE_URL

config = context.config
config.set_main_option("sqlalchemy.url", DATABASE_URL)
target_metadata = Base.metadata
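For reference, a minimal app/models.py that env.py could import might look like this (a sketch — your actual models will differ):

```python
from sqlalchemy import Column, DateTime, Text, func
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Conversation(Base):
    __tablename__ = "conversations"

    id = Column(UUID(as_uuid=True), primary_key=True)
    title = Column(Text)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
```

Setting target_metadata = Base.metadata gives autogenerate the full picture of every table defined on this Base.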

Generate a migration from your model changes:

alembic revision --autogenerate -m "add_tool_executions_table"

This compares your SQLAlchemy models against the current database schema and generates the corresponding operations. Always review the generated migration — autogenerate misses some changes (renamed tables and columns appear as drop-and-recreate, and server-side defaults are not compared by default):

# alembic/versions/001_add_tool_executions_table.py
"""add tool_executions table"""

from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import JSONB, UUID

revision = "001_add_tool_exec"
down_revision = "000_initial"


def upgrade():
    op.create_table(
        "tool_executions",
        sa.Column("id", UUID(as_uuid=True), primary_key=True),
        sa.Column("message_id", UUID(as_uuid=True),
                  sa.ForeignKey("messages.id", ondelete="CASCADE"),
                  nullable=False),
        sa.Column("tool_name", sa.Text(), nullable=False),
        sa.Column("input_args", JSONB, server_default="{}"),
        sa.Column("output_data", JSONB),
        sa.Column("duration_ms", sa.Integer()),
        sa.Column("created_at", sa.DateTime(timezone=True),
                  server_default=sa.func.now()),
    )
    op.create_index(
        "idx_tool_exec_message",
        "tool_executions",
        ["message_id"],
    )


def downgrade():
    op.drop_index("idx_tool_exec_message")
    op.drop_table("tool_executions")

Data Migrations

Schema migrations change table structure. Data migrations transform existing data. Agent systems frequently need both — for example, splitting a metadata JSONB column into dedicated columns:

"""split metadata into dedicated columns"""

from alembic import op
import sqlalchemy as sa

revision = "005_split_metadata"
down_revision = "004_add_tags"


def upgrade():
    # Step 1: Add new columns
    op.add_column("conversations",
        sa.Column("model_name", sa.Text()))
    op.add_column("conversations",
        sa.Column("temperature", sa.Float()))

    # Step 2: Backfill from JSONB
    op.execute("""
        UPDATE conversations
        SET model_name = metadata->>'model',
            temperature = (metadata->>'temperature')::float
        WHERE metadata ? 'model'
    """)

    # Step 3: Add NOT NULL constraint after backfill
    op.alter_column("conversations", "model_name",
                    nullable=False, server_default="gpt-4o")


def downgrade():
    # Destructive: the backfilled values are dropped, not merged
    # back into the metadata JSONB column
    op.drop_column("conversations", "temperature")
    op.drop_column("conversations", "model_name")

Always separate the schema change (add column) from the data migration (backfill) and the constraint change (add NOT NULL). This three-step approach works safely with zero-downtime deployments.
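On large tables, a single UPDATE in step 2 can hold locks and generate WAL for a long time. A common mitigation is to backfill in batches so each statement commits quickly. Here is a sketch of the loop — the execute callable stands in for a database connection and is assumed to return the number of rows updated:

```python
BACKFILL_SQL = """
    UPDATE conversations
    SET model_name = metadata->>'model'
    WHERE id IN (
        SELECT id FROM conversations
        WHERE model_name IS NULL AND metadata ? 'model'
        LIMIT {batch_size}
    )
"""


def run_batched_backfill(execute, batch_size=10_000):
    """Backfill in small batches so each UPDATE commits quickly."""
    total = 0
    while True:
        updated = execute(BACKFILL_SQL.format(batch_size=batch_size))
        total += updated
        if updated < batch_size:
            break  # last partial batch: nothing left to backfill
    return total
```

Inside an Alembic migration you would drive this with op.get_bind() and commit per batch; some teams instead run large backfills as a separate script outside the migration entirely.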

Prisma Migrate Patterns

Prisma Migrate generates SQL migrations from schema changes:


# Create a migration
npx prisma migrate dev --name add_tool_executions

# Apply in production
npx prisma migrate deploy

Prisma generates a migrations/ directory with timestamped SQL files. For data migrations that Prisma cannot express declaratively, create an empty migration and write custom SQL:

npx prisma migrate dev --create-only --name backfill_model_names

Then edit the generated SQL file:

-- prisma/migrations/20260317_backfill_model_names/migration.sql
UPDATE conversations
SET model_name = COALESCE(metadata->>'model', 'gpt-4o')
WHERE model_name IS NULL;

Rollback Strategies

Alembic supports explicit downgrades:

# Roll back one migration
alembic downgrade -1

# Roll back to a specific revision
alembic downgrade 004_add_tags

Prisma Migrate does not support automatic rollbacks. Instead, create a new forward migration that reverses the change. For production systems this forward-only model is often the safer approach anyway: migrations only ever move in one direction, which is simpler to reason about in CI/CD pipelines.
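Conceptually, a forward-only revert is just a new migration whose upgrade runs the old migration's downgrade. A toy sketch, with migrations modeled as dicts (in practice a Prisma revert is a hand-written SQL migration file):

```python
def make_revert_migration(migration, current_head):
    """Build a forward migration that undoes `migration`.

    Its upgrade runs the original downgrade SQL, and it chains onto
    the current head so the migration history stays linear.
    """
    return {
        "revision": f"revert_{migration['revision']}",
        "down_revision": current_head,
        "up_sql": migration["down_sql"],
        "down_sql": migration["up_sql"],
    }
```

The key property is that history only grows: every environment reaches the reverted state by applying one more migration, never by unwinding one.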

For both tools, always test the rollback path in staging before deploying to production. A migration that cannot be rolled back safely needs a more careful deployment strategy.

CI Integration

Add migration validation to your CI pipeline:

# .github/workflows/migrations.yml
name: Migration Check
on: [pull_request]

jobs:
  check-migrations:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: test_agents
          POSTGRES_PASSWORD: test
        ports: ["5432:5432"]
    steps:
      - uses: actions/checkout@v4
      - name: Apply migrations
        run: |
          alembic upgrade head
          alembic downgrade base
          alembic upgrade head
        env:
          DATABASE_URL: postgresql://postgres:test@localhost/test_agents

This "up-down-up" pattern verifies that both upgrade and downgrade paths work correctly. Any migration that fails this cycle is caught before it reaches production.

FAQ

How do I handle migrations when multiple developers are working on the same schema?

Use a linear migration chain — each migration has exactly one parent. When two developers create migrations simultaneously, the second one to merge must rebase their migration on top of the first. Both Alembic and Prisma detect branch conflicts and refuse to apply them.
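The conflict detection is worth understanding: a head is any revision that no other revision names as its parent. A linear chain has exactly one head; two migrations branched off the same parent produce two, which is what the tools refuse to apply. A minimal sketch:

```python
def find_heads(revisions):
    """Return revisions that nothing else points to as its down_revision."""
    parents = {r["down_revision"] for r in revisions}
    return sorted(r["revision"] for r in revisions if r["revision"] not in parents)
```

In Alembic, `alembic heads` lists the current heads; rebasing the later migration (editing its down_revision to point at the other head) restores a linear chain, while `alembic merge` instead records a merge revision with both as parents.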

Should I squash old migrations?

Yes, periodically. Once all environments are past a certain migration, squash everything before it into a single baseline migration. This speeds up fresh database setups and reduces the number of files in the migrations directory. In Alembic, replace the old revisions with a single baseline snapshot and use alembic stamp to mark existing databases as already up to date; note that alembic merge combines divergent heads and is not a squashing tool.

How do I run migrations in Kubernetes?

Run migrations as an init container or a Kubernetes Job that executes before the application pods start. Never run migrations inside the application startup code — concurrent pods would race to apply the same migration. Use a Job with backoffLimit: 1 and make your deployment depend on its completion.
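A minimal Job manifest along those lines might look like this (the image name and secret name are placeholders for your own):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 1            # fail fast rather than retry a broken migration
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: your-registry/agent-app:latest   # placeholder image
          command: ["alembic", "upgrade", "head"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials            # placeholder secret
                  key: url
```

Gate the application rollout on this Job completing (for example, by waiting on it in your deploy pipeline) so new pods never start against an old schema.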


#DatabaseMigrations #Alembic #PrismaMigrate #CICD #AIAgents #AgenticAI #LearnAI #AIEngineering


CallSphere Team

Expert insights on AI voice agents and customer communication automation.
