
Integration Testing Agent Pipelines: End-to-End Tests with Real LLM Calls

Learn how to structure integration tests for AI agent pipelines that make real LLM calls, manage API costs, use snapshot testing, and run safely in CI/CD.

When Unit Tests Are Not Enough

Unit tests with mocked LLMs verify your agent's logic in isolation, but they cannot catch prompt regressions, model behavior changes, or integration failures between components. Integration tests that make real LLM calls fill this gap — they validate that your full pipeline works correctly from input to final output.

The challenge is managing cost, speed, and non-determinism. A well-designed integration test suite runs on a schedule rather than every commit, uses cost controls, and evaluates outputs semantically rather than with exact string matching.

Test Structure for Agent Integration Tests

Organize integration tests separately from unit tests so they can run on different schedules.

# tests/integration/conftest.py
import os
import pytest

def pytest_configure(config):
    config.addinivalue_line("markers", "integration: real LLM calls (slow, costs tokens)")

@pytest.fixture(scope="session")
def api_key():
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        pytest.skip("OPENAI_API_KEY not set — skipping integration tests")
    return key

@pytest.fixture(scope="session")
def agent(api_key):
    from my_agent.core import Agent
    return Agent(api_key=api_key, model="gpt-4o-mini")  # cheaper model for tests

Run integration tests separately using pytest markers:

# Unit tests only (fast, every commit)
pytest -m "not integration"

# Integration tests only (scheduled, costs tokens)
pytest -m integration --timeout=120
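Rather than decorating every test function individually, you can mark an entire module at once with pytest's module-level `pytestmark` variable, so every test in a file under `tests/integration/` is treated as an integration test (the filename here is just an example):

```python
# tests/integration/test_pipeline.py
import pytest

# Mark every test in this module as an integration test;
# `pytest -m "not integration"` will then deselect the whole file.
pytestmark = pytest.mark.integration
```

This keeps the marker in one place per file and makes it hard to forget on a newly added test.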

API Key Management in CI

Never hardcode API keys. Use CI secrets and environment variables.


# .github/workflows/integration-tests.yml
name: Agent Integration Tests
on:
  schedule:
    - cron: "0 6 * * 1"  # Weekly on Monday at 6am
  workflow_dispatch: {}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -e ".[test]"
      - run: pytest -m integration --timeout=120
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY_TEST }}

Cost Control Strategies

Prevent runaway costs with budget caps and smart model selection.

import pytest
from dataclasses import dataclass

@dataclass
class TokenBudget:
    max_tokens: int = 50_000
    used_tokens: int = 0

    def check(self, tokens_used: int):
        self.used_tokens += tokens_used
        if self.used_tokens > self.max_tokens:
            pytest.skip(f"Token budget exhausted: {self.used_tokens}/{self.max_tokens}")

@pytest.fixture(scope="session")
def token_budget():
    return TokenBudget(max_tokens=50_000)

@pytest.mark.integration
def test_agent_answers_question(agent, token_budget):
    result = agent.run("What is the capital of France?")
    token_budget.check(result.usage.total_tokens)
    assert "paris" in result.output.lower()
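A token cap can also be paired with a rough dollar cap, which is easier to reason about when reviewing CI bills. A minimal sketch, with illustrative per-million-token prices that are assumptions here and should be replaced with your provider's current rates:

```python
from dataclasses import dataclass

# Illustrative USD prices per 1M tokens -- check your provider's pricing page.
PRICE_PER_MTOK = {"gpt-4o-mini": 0.15, "gpt-4o": 2.50}

@dataclass
class CostBudget:
    max_usd: float = 1.00
    used_usd: float = 0.0

    def charge(self, model: str, tokens: int) -> float:
        """Record approximate spend for one call; return total spent so far."""
        self.used_usd += PRICE_PER_MTOK[model] * tokens / 1_000_000
        return self.used_usd

    @property
    def exhausted(self) -> bool:
        return self.used_usd >= self.max_usd
```

Wire it up the same way as `TokenBudget`: expose it as a session-scoped fixture and call `pytest.skip(...)` when `exhausted` is true.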

Snapshot Testing for LLM Outputs

Exact string matching fails because LLM outputs vary. Use semantic snapshot testing instead.

import json
import pytest
from pathlib import Path

SNAPSHOT_DIR = Path(__file__).parent / "snapshots"

def semantic_match(actual: str, expected: str, threshold: float = 0.8) -> bool:
    """Check if actual output covers the key points in expected."""
    expected_keywords = set(expected.lower().split())
    actual_lower = actual.lower()
    matches = sum(1 for kw in expected_keywords if kw in actual_lower)
    return (matches / len(expected_keywords)) >= threshold

@pytest.mark.integration
def test_agent_summarizes_article(agent):
    article = "Python 3.13 introduces a JIT compiler and removes the GIL..."
    result = agent.run(f"Summarize this: {article}")

    # Save snapshot for manual review
    snapshot_path = SNAPSHOT_DIR / "summarize_article.json"
    snapshot_path.parent.mkdir(exist_ok=True)
    snapshot_path.write_text(json.dumps({
        "input": article,
        "output": result.output,
        "model": result.model,
    }, indent=2))

    # Semantic assertion
    assert semantic_match(result.output, "Python JIT compiler GIL removed")
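The keyword-overlap check above is deliberately crude: punctuation, casing, and an empty expectation can all cause false misses or errors. A slightly more robust variant (still a heuristic, not an LLM judge) normalizes tokens before comparing:

```python
import re

def semantic_match_normalized(actual: str, expected: str, threshold: float = 0.8) -> bool:
    """Keyword-coverage check that ignores case and punctuation."""
    def tokenize(s: str) -> set[str]:
        return set(re.findall(r"[a-z0-9]+", s.lower()))

    expected_tokens = tokenize(expected)
    if not expected_tokens:  # avoid division by zero on empty expectations
        return True
    actual_tokens = tokenize(actual)
    return len(expected_tokens & actual_tokens) / len(expected_tokens) >= threshold
```

For higher-stakes assertions, you could go further and score outputs with an embedding similarity or an LLM-as-judge call, at additional cost per test.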

Handling Non-Determinism

Use flexible assertions that check for meaning rather than exact text.

import json
import pytest

@pytest.mark.integration
def test_agent_tool_selection(agent):
    """Verify the agent calls the correct tool, regardless of phrasing."""
    result = agent.run("What is the weather in Tokyo?")

    assert result.tool_calls is not None, "Agent should have called a tool"
    tool_names = [tc.function.name for tc in result.tool_calls]
    assert "get_weather" in tool_names
    args = json.loads(result.tool_calls[0].function.arguments)
    assert "tokyo" in args.get("location", "").lower()
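For assertions that are inherently probabilistic, another common pattern is to run the call a few times and pass only if a majority of runs succeed. A minimal sketch (the run count and the majority bar are arbitrary choices, and each extra run costs tokens):

```python
def passes_majority(check, runs: int = 3) -> bool:
    """Run a zero-arg check repeatedly; True if more than half the runs succeed.

    `check` should return a truthy/falsy result, or raise AssertionError on failure.
    """
    successes = 0
    for _ in range(runs):
        try:
            if check() is not False:
                successes += 1
        except AssertionError:
            pass
    return successes * 2 > runs
```

In a test you would wrap the agent call and its assertion in a small closure and assert `passes_majority(closure)`.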

FAQ

How often should integration tests run?

Run them on a schedule — daily or weekly — rather than on every commit. This balances cost against coverage. Also run them on-demand before major releases or after prompt changes.

Which model should integration tests use?

Use the cheapest model that still exercises your pipeline — typically gpt-4o-mini or gpt-3.5-turbo. Only test with your production model in a final pre-release validation step.

How do I debug a flaky integration test?

Log the full request and response for every LLM call during test runs. When a test fails, the log shows exactly what the model returned. Use a --save-traces flag to write these logs only on failure.


#IntegrationTesting #AIAgents #EndtoEndTesting #Pytest #Python #CICD #AgenticAI #LearnAI #AIEngineering
