Vercel AI SDK: Building Streaming AI Interfaces with React and Next.js
Learn how to use the Vercel AI SDK to build real-time streaming AI interfaces. Covers the useChat hook, server-side streaming, tool calling with generative UI, and multi-provider support in Next.js applications.
What the Vercel AI SDK Provides
The Vercel AI SDK (published on npm as ai) is an open-source TypeScript library that simplifies building AI-powered user interfaces. It provides three layers: AI SDK Core for server-side model interactions, AI SDK UI for React hooks that manage chat state and streaming, and AI SDK RSC for React Server Component integration with generative UI.
Unlike using the OpenAI SDK directly, the Vercel AI SDK handles the complex plumbing of streaming responses to the browser, managing conversation state, and rendering tool results — all through a clean, declarative API.
Installation
```bash
npm install ai @ai-sdk/openai
```
The ai package provides the core framework. Provider packages like @ai-sdk/openai, @ai-sdk/anthropic, or @ai-sdk/google supply model-specific adapters. This separation means you can swap providers without changing application logic.
Server-Side: Creating a Streaming API Route
In a Next.js App Router project, create a route handler that streams LLM responses:
```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: "You are a helpful TypeScript expert.",
    messages,
  });

  return result.toDataStreamResponse();
}
```
The streamText function initiates streaming from the provider. The toDataStreamResponse() method converts the stream into a format the client-side hooks understand — it encodes text deltas, tool calls, and metadata into a structured data stream protocol.
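The data stream protocol is line-oriented: each line carries a type prefix, a colon, and a JSON payload (text deltas use the "0" prefix). As a rough sketch of what a consumer of that protocol does — the prefix codes here are assumptions for illustration, and in practice the SDK's hooks parse this for you — a minimal text extractor might look like:

```typescript
// Minimal sketch of parsing the AI SDK data stream protocol.
// Assumption: each line is `<type>:<JSON payload>`, with text deltas on the
// "0" prefix. Real clients should use the SDK's hooks instead of hand-rolling this.
export function extractTextDeltas(stream: string): string {
  let text = "";
  for (const line of stream.split("\n")) {
    const sep = line.indexOf(":");
    if (sep === -1) continue; // skip blank or malformed lines
    const type = line.slice(0, sep);
    const payload = line.slice(sep + 1);
    if (type === "0") {
      // Text delta parts carry a JSON-encoded string.
      text += JSON.parse(payload) as string;
    }
    // Other prefixes (tool calls, finish metadata, etc.) are ignored here.
  }
  return text;
}
```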
Client-Side: The useChat Hook
The useChat hook manages the entire conversation lifecycle:
```tsx
// app/page.tsx
"use client";
import { useChat } from "ai/react";

export default function ChatPage() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({ api: "/api/chat" });

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map((m) => (
          <div
            key={m.id}
            className={m.role === "user" ? "text-right" : "text-left"}
          >
            <span className="font-semibold">
              {m.role === "user" ? "You" : "Assistant"}:
            </span>
            <p>{m.content}</p>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask something..."
          className="flex-1 border rounded px-3 py-2"
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          Send
        </button>
      </form>
    </div>
  );
}
```
The hook handles sending messages to the API, parsing the streaming response, appending assistant tokens as they arrive, and managing loading state. You get real-time token-by-token rendering with zero custom WebSocket or EventSource code.
Tool Calling with the AI SDK
Define tools on the server with Zod schemas for type-safe parameter validation:
```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: "You are a helpful assistant that can look up stock prices.",
    messages,
    tools: {
      getStockPrice: tool({
        description: "Get the current stock price for a ticker symbol",
        parameters: z.object({
          symbol: z.string().describe("Stock ticker symbol, e.g., AAPL"),
        }),
        execute: async ({ symbol }) => {
          // Call your real stock API here
          const price = await fetchStockPrice(symbol);
          return { symbol, price, currency: "USD" };
        },
      }),
    },
    maxSteps: 5, // Allow up to 5 tool-call rounds
  });

  return result.toDataStreamResponse();
}
```
The maxSteps parameter controls how many tool-call-then-continue rounds the model can perform. The AI SDK automatically feeds tool results back to the model and continues the conversation.
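The route above assumes a `fetchStockPrice` helper. A hypothetical implementation — the endpoint URL and response shape here are invented placeholders, not a real API — might look like:

```typescript
// Hypothetical stock price lookup used by the getStockPrice tool.
// The endpoint and response shape are placeholders for your real data source.
export function normalizeSymbol(symbol: string): string {
  // Tickers are conventionally uppercase with no surrounding whitespace.
  return symbol.trim().toUpperCase();
}

export async function fetchStockPrice(symbol: string): Promise<number> {
  const ticker = normalizeSymbol(symbol);
  const res = await fetch(`https://api.example.com/quote/${ticker}`);
  if (!res.ok) {
    throw new Error(`Quote lookup failed for ${ticker}: ${res.status}`);
  }
  const data = (await res.json()) as { price: number };
  return data.price;
}
```

Whatever the tool returns from `execute` is serialized and fed back to the model, so keep the result small and structured.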
Multi-Provider Support
Switching between providers requires changing only the model import:
```typescript
import { anthropic } from "@ai-sdk/anthropic";

const result = streamText({
  model: anthropic("claude-sonnet-4-20250514"),
  messages,
});
```
Every provider adapter conforms to the same interface, so your tool definitions, hooks, and streaming logic remain identical.
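One convenient pattern — an application-level convention, not an SDK feature — is to drive provider selection from a single `provider:model` string in configuration, then hand the parsed id to the matching adapter:

```typescript
// Sketch: parse a "provider:model" spec so the provider can be chosen via config.
// Mapping the provider name to its adapter (openai(...), anthropic(...), etc.)
// is a small switch at the call site; this helper only splits the spec.
export function parseModelSpec(spec: string): { provider: string; modelId: string } {
  const sep = spec.indexOf(":");
  if (sep === -1) {
    throw new Error(`Expected "provider:model", got "${spec}"`);
  }
  return { provider: spec.slice(0, sep), modelId: spec.slice(sep + 1) };
}
```

With this in place, `parseModelSpec("anthropic:claude-sonnet-4-20250514")` yields the provider name and model id, and an environment variable can switch providers without a code change.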
Generative UI with React Server Components
The AI SDK's RSC integration lets you stream React components from the server:
```tsx
import { openai } from "@ai-sdk/openai";
import { streamUI } from "ai/rsc";
import { z } from "zod";

// LoadingSpinner, WeatherCard, and getWeather are your own components/helpers.
const result = await streamUI({
  model: openai("gpt-4o"),
  messages,
  tools: {
    showWeather: {
      description: "Display weather for a city",
      parameters: z.object({ city: z.string() }),
      generate: async function* ({ city }) {
        yield <LoadingSpinner />;
        const data = await getWeather(city);
        return <WeatherCard city={city} data={data} />;
      },
    },
  },
});
```
Instead of returning JSON that the client renders, you stream actual React components. The loading spinner appears immediately, then gets replaced by the weather card once the data arrives.
FAQ
How does the Vercel AI SDK differ from using the OpenAI SDK directly?
The OpenAI SDK gives you raw API access — you handle streaming, state management, and UI rendering yourself. The Vercel AI SDK adds a framework layer on top: React hooks for conversation state, a data stream protocol for efficient client-server communication, and abstractions for tool calling and multi-step agent loops. Use the OpenAI SDK for backend-only scripts; use the Vercel AI SDK for web applications.
Can I use the Vercel AI SDK without Next.js?
Yes. The core streaming functions (streamText, generateText) work in any Node.js environment. The React hooks work with any React framework. However, the RSC integration (streamUI) requires a framework that supports React Server Components, such as Next.js.
How do I handle errors in streaming responses?
The useChat hook accepts an onError callback. On the server side, wrap your streamText call in a try-catch and return appropriate HTTP error responses. The SDK also supports the onFinish callback for logging completed conversations and token usage.
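On the server, a small JSON error helper keeps the try-catch branch tidy. This is a sketch — the error body shape is an application choice, not an SDK convention:

```typescript
// Sketch of a JSON error helper for route handlers. Wrap the streamText call
// in try/catch and return this on failure; the client's onError callback
// receives the failed response.
export function errorResponse(status: number, message: string): Response {
  return new Response(JSON.stringify({ error: message }), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}

// Usage inside the route handler:
// try {
//   const result = streamText({ model, messages });
//   return result.toDataStreamResponse();
// } catch (err) {
//   console.error(err);
//   return errorResponse(500, "The model request failed.");
// }
```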
CallSphere Team