API Integration Tools: Connecting AI Agents to REST and GraphQL APIs
Build generic API integration tools that let AI agents call REST and GraphQL endpoints with proper authentication, error handling, retry logic, and response mapping. Learn patterns for building flexible yet safe API tools.
The API Tool Pattern
Most agent workflows eventually need to call external APIs — fetching data from a CRM, creating tickets in a project tracker, sending notifications, or querying third-party services. Rather than building a separate tool for every API endpoint, you can build a generic API caller that the agent configures per request.
This post shows two approaches: a constrained approach with predefined API configurations, and a flexible approach with a generic HTTP tool.
Approach 1: Predefined API Configurations
The safest pattern pre-registers each API the agent can call, including auth credentials and allowed endpoints:
from dataclasses import dataclass
from typing import Optional
import json
import re

import httpx


@dataclass
class APIConfig:
    name: str
    base_url: str
    auth_header: str
    auth_value: str
    allowed_methods: set[str]
    allowed_paths: list[str]  # regex patterns
    timeout: int = 15


class APIToolkit:
    def __init__(self):
        self.apis: dict[str, APIConfig] = {}
        self.client = httpx.AsyncClient(timeout=15)

    def register_api(self, config: APIConfig):
        self.apis[config.name] = config

    async def call_api(
        self,
        api_name: str,
        method: str,
        path: str,
        params: Optional[dict] = None,
        body: Optional[dict] = None,
    ) -> str:
        if api_name not in self.apis:
            return f"Error: Unknown API '{api_name}'. Available: {list(self.apis.keys())}"
        config = self.apis[api_name]
        if method.upper() not in config.allowed_methods:
            return f"Error: Method {method} not allowed for {api_name}"
        path_allowed = any(
            re.match(pattern, path) for pattern in config.allowed_paths
        )
        if not path_allowed:
            return f"Error: Path {path} not allowed for {api_name}"
        url = f"{config.base_url.rstrip('/')}/{path.lstrip('/')}"
        try:
            response = await self.client.request(
                method=method.upper(),
                url=url,
                params=params,
                json=body,
                headers={config.auth_header: config.auth_value},
                timeout=config.timeout,
            )
            response.raise_for_status()
            data = response.json()
            result = json.dumps(data, indent=2, default=str)
            if len(result) > 5000:
                result = result[:5000] + "\n[Response truncated]"
            return result
        except httpx.HTTPStatusError as e:
            return f"API error: HTTP {e.response.status_code} - {e.response.text[:500]}"
        except Exception as e:
            return f"Request failed: {str(e)}"
Register APIs at startup with their credentials kept server-side:
import os

toolkit = APIToolkit()

toolkit.register_api(APIConfig(
    name="github",
    base_url="https://api.github.com",
    auth_header="Authorization",
    auth_value=f"Bearer {os.environ['GITHUB_TOKEN']}",
    allowed_methods={"GET"},
    allowed_paths=[
        r"/repos/[\w-]+/[\w-]+$",
        r"/repos/[\w-]+/[\w-]+/issues.*",
        r"/repos/[\w-]+/[\w-]+/pulls.*",
    ],
))

toolkit.register_api(APIConfig(
    name="slack",
    base_url="https://slack.com/api",
    auth_header="Authorization",
    auth_value=f"Bearer {os.environ['SLACK_TOKEN']}",
    allowed_methods={"GET", "POST"},
    allowed_paths=[
        r"/chat\.postMessage$",
        r"/conversations\.list$",
        r"/conversations\.history$",
    ],
))
The LLM never sees API keys. It only knows the API name and the paths it can call.
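To see the allow-list in action, here is the same `re.match` check that `call_api` performs, run standalone against the github patterns registered above (the example paths are hypothetical):

```python
import re

# The github patterns registered above
allowed_paths = [
    r"/repos/[\w-]+/[\w-]+$",
    r"/repos/[\w-]+/[\w-]+/issues.*",
    r"/repos/[\w-]+/[\w-]+/pulls.*",
]

def path_allowed(path: str) -> bool:
    # Same check as in APIToolkit.call_api
    return any(re.match(pattern, path) for pattern in allowed_paths)

print(path_allowed("/repos/octocat/hello-world/issues"))  # True
print(path_allowed("/user/emails"))                       # False: not on the allow-list
```

Because `re.match` anchors at the start of the string, a path like `/user/emails` cannot sneak through by embedding an allowed substring later in the path.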
The Tool Schema for Predefined APIs
api_tool_schema = {
    "type": "function",
    "function": {
        "name": "call_api",
        "description": "Call a registered external API. Available APIs: github (GET repos, issues, PRs), slack (GET/POST messages, channels). Auth is handled automatically.",
        "parameters": {
            "type": "object",
            "properties": {
                "api_name": {
                    "type": "string",
                    "enum": ["github", "slack"],
                    "description": "Which API to call"
                },
                "method": {
                    "type": "string",
                    "enum": ["GET", "POST", "PUT", "PATCH", "DELETE"],
                    "description": "HTTP method"
                },
                "path": {
                    "type": "string",
                    "description": "API endpoint path, e.g. /repos/owner/repo/issues"
                },
                "params": {
                    "type": "object",
                    "description": "URL query parameters as key-value pairs"
                },
                "body": {
                    "type": "object",
                    "description": "Request body for POST/PUT/PATCH requests"
                }
            },
            "required": ["api_name", "method", "path"]
        }
    }
}
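With the schema in place, the agent's tool calls can be routed to the toolkit. A minimal dispatch sketch, assuming an OpenAI-style tool-call payload; the `dispatch_tool_call` name and the exact payload shape are illustrative:

```python
import json

async def dispatch_tool_call(toolkit, tool_call: dict) -> str:
    # Assumes an OpenAI-style shape:
    # {"function": {"name": "...", "arguments": "<JSON string>"}}
    fn = tool_call["function"]
    if fn["name"] != "call_api":
        return f"Error: Unknown tool '{fn['name']}'"
    args = json.loads(fn["arguments"])
    return await toolkit.call_api(
        api_name=args["api_name"],
        method=args["method"],
        path=args["path"],
        params=args.get("params"),
        body=args.get("body"),
    )
```

Note that errors come back as strings rather than exceptions, so the model sees the failure in its tool result and can correct course on the next turn.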
Approach 2: GraphQL Tool
For GraphQL APIs, the tool accepts a query string and variables:
import json

import httpx


class GraphQLTool:
    def __init__(self, endpoint: str, headers: dict):
        self.endpoint = endpoint
        self.headers = headers
        self.client = httpx.AsyncClient(timeout=15)

    async def execute(self, query: str, variables: dict | None = None) -> str:
        # Coarse keyword guard: block anything that looks like a write
        if any(keyword in query.upper() for keyword in ("MUTATION", "DELETE")):
            return "Error: Only queries are allowed, not mutations"
        try:
            response = await self.client.post(
                self.endpoint,
                json={"query": query, "variables": variables or {}},
                headers=self.headers,
            )
            data = response.json()
            # GraphQL reports errors in the body, usually alongside HTTP 200
            if "errors" in data:
                return f"GraphQL errors: {json.dumps(data['errors'], indent=2)}"
            return json.dumps(data.get("data", {}), indent=2, default=str)
        except Exception as e:
            return f"GraphQL request failed: {str(e)}"
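The keyword guard above is deliberately coarse: it scans the raw query text, so a read-only query whose field names merely mention a blocked word is also rejected. That trade-off is intentional, since a false positive is cheaper than letting a write slip through. A standalone sketch of the same check (the `is_read_only` helper is illustrative):

```python
def is_read_only(query: str) -> bool:
    # Mirrors the guard in GraphQLTool.execute: reject anything that
    # looks like a mutation, accepting some false positives
    return not any(kw in query.upper() for kw in ("MUTATION", "DELETE"))

print(is_read_only("query { viewer { login } }"))       # True
print(is_read_only("mutation { createIssue { id } }"))  # False
```

For stricter enforcement you could parse the query with a GraphQL library and inspect the operation type, but the string check is a reasonable first line of defense.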
Retry Logic with Exponential Backoff
API calls fail. Build retry logic into your toolkit:
import asyncio

async def call_with_retry(
    func,
    max_retries: int = 3,
    base_delay: float = 1.0,
    **kwargs,
) -> str:
    for attempt in range(max_retries):
        result = await func(**kwargs)
        if not result.startswith("Error:") and not result.startswith("API error:"):
            return result
        if "HTTP 429" in result or "HTTP 5" in result:
            # Transient failure: back off exponentially before retrying
            delay = base_delay * (2 ** attempt)
            await asyncio.sleep(delay)
            continue
        # Non-retryable error
        return result
    return result  # Return last error after all retries exhausted
Only retry on transient errors (429 rate limits, 5xx server errors). Client errors (400, 401, 404) should not be retried.
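To watch the backoff behave without hitting a real API, here is a self-contained run against a simulated endpoint that rate-limits twice before succeeding. The `flaky_api` helper is made up for the demo, and `call_with_retry` is repeated so the sketch runs on its own:

```python
import asyncio

# call_with_retry repeated from above so this sketch runs standalone
async def call_with_retry(func, max_retries: int = 3, base_delay: float = 1.0, **kwargs) -> str:
    for attempt in range(max_retries):
        result = await func(**kwargs)
        if not result.startswith("Error:") and not result.startswith("API error:"):
            return result
        if "HTTP 429" in result or "HTTP 5" in result:
            await asyncio.sleep(base_delay * (2 ** attempt))
            continue
        return result
    return result

calls = []

async def flaky_api() -> str:
    # Simulated endpoint: rate-limits twice, then succeeds
    calls.append(1)
    if len(calls) < 3:
        return "API error: HTTP 429 - rate limited"
    return '{"ok": true}'

result = asyncio.run(call_with_retry(flaky_api, base_delay=0.01))
print(result, "after", len(calls), "attempts")  # '{"ok": true}' after 3 attempts
```

With `base_delay=1.0` the waits would be 1s, then 2s, then 4s; the tiny delay here just keeps the demo fast.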
Response Mapping
API responses often contain more data than the LLM needs. Map responses to concise formats:
def map_github_issue(raw: dict) -> dict:
    return {
        "number": raw["number"],
        "title": raw["title"],
        "state": raw["state"],
        "author": raw["user"]["login"],
        "labels": [label["name"] for label in raw.get("labels", [])],
        "created": raw["created_at"],
        "comments": raw["comments"],
    }
Smaller, cleaner tool results mean less token usage and better LLM comprehension.
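For instance, applied to a trimmed-down issue payload (the field values below are made up, and `map_github_issue` is repeated so the sketch runs standalone), the mapper keeps seven fields and silently drops the rest:

```python
def map_github_issue(raw: dict) -> dict:
    # Repeated from above so this sketch runs standalone
    return {
        "number": raw["number"],
        "title": raw["title"],
        "state": raw["state"],
        "author": raw["user"]["login"],
        "labels": [label["name"] for label in raw.get("labels", [])],
        "created": raw["created_at"],
        "comments": raw["comments"],
    }

# Trimmed-down issue payload with made-up values
raw_issue = {
    "number": 42,
    "title": "Login fails on mobile",
    "state": "open",
    "user": {"login": "octocat", "id": 1},
    "labels": [{"name": "bug", "color": "d73a4a"}],
    "created_at": "2024-01-15T10:00:00Z",
    "comments": 3,
    "reactions": {"total_count": 7},  # dropped by the mapper
}

mapped = map_github_issue(raw_issue)
print(mapped["author"], mapped["labels"])  # octocat ['bug']
```

A real GitHub issue payload carries dozens of additional fields (URLs, avatars, reaction counts), so the savings compound quickly across a list of issues.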
FAQ
Should the LLM know about API authentication details?
Never. API keys, tokens, and credentials must be stored server-side and injected by your tool implementation. The LLM should only know the API name and what endpoints are available. Exposing credentials risks leaking them through the LLM's output.
How do I handle paginated API responses?
Return the first page of results along with pagination metadata (next page token, total count). Let the LLM decide whether to fetch more pages by calling the tool again with pagination parameters. Do not automatically fetch all pages — this can result in hundreds of API calls.
When should I use predefined APIs vs a generic HTTP tool?
Use predefined APIs for production systems. They enforce strict access control and keep credentials safe. Use a generic HTTP tool only for development, prototyping, or internal tools where the user is trusted. In any system exposed to end users, always use the predefined approach.
#APIIntegration #REST #GraphQL #ToolDesign #AIAgents #AgenticAI #LearnAI #AIEngineering
CallSphere Team