Generative UI with AI Agents: Dynamically Creating React Components from Natural Language
Explore how the Vercel AI SDK's generative UI capability lets AI agents stream fully interactive React components to users, replacing static text responses with dynamic, data-rich interfaces.
Beyond Text: Why Agents Should Render UI
Traditional chatbots return plain text or markdown. When a user asks "show me my sales data for Q1," they get a text table at best. Generative UI flips this model — the agent returns actual React components: interactive charts, filterable tables, clickable cards. The user gets a rich application experience generated on demand from natural language.
The Vercel AI SDK pioneered this pattern with its streamUI function, which lets server-side agent logic stream React Server Components directly to the client. The LLM decides which component to render and with what props, and the framework handles serialization, streaming, and hydration.
How Generative UI Works
The architecture involves three layers: the LLM decides what to render, server actions produce the React component tree, and the client renders the streamed components progressively.
// app/actions.tsx
"use server";

import { streamUI } from "ai/rsc";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
// Adjust these import paths to your project layout
import { BarChart } from "@/components/BarChart";
import { MetricCard } from "@/components/MetricCard";

// Define the tools that return React components
export async function agentChat(userMessage: string) {
  const result = await streamUI({
    model: openai("gpt-4o"),
    system: "You are a data analyst assistant. Use tools to show visual components.",
    messages: [{ role: "user", content: userMessage }],
    tools: {
      showBarChart: {
        description: "Display a bar chart for the given data",
        parameters: z.object({
          title: z.string(),
          data: z.array(
            z.object({
              label: z.string(),
              value: z.number(),
            })
          ),
        }),
        generate: async function* ({ title, data }) {
          // Yield a skeleton immediately while the final component is prepared
          yield <div className="animate-pulse h-48 bg-gray-200 rounded" />;
          return <BarChart title={title} data={data} />;
        },
      },
      showMetricCard: {
        description: "Display a KPI metric card",
        parameters: z.object({
          label: z.string(),
          value: z.string(),
          change: z.number(),
        }),
        generate: async function* ({ label, value, change }) {
          yield <div className="animate-pulse h-24 bg-gray-200 rounded" />;
          return <MetricCard label={label} value={value} change={change} />;
        },
      },
    },
  });

  return result.value;
}
The generate function is an async generator. It yields a loading skeleton immediately, then returns the final component. The client sees the skeleton first, then the fully rendered component: progressive rendering with minimal layout shift, as long as the skeleton reserves roughly the same space as the final component.
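The yield-then-return flow can be sketched without any React at all. In this minimal example, placeholder strings stand in for the components streamUI would stream, and the consumer collects each intermediate state followed by the final one:

```typescript
// Sketch: how a consumer sees an async generator's yields before its return.
// Placeholder strings stand in for the React elements streamUI streams.
async function* generate(): AsyncGenerator<string, string, void> {
  yield "skeleton";                            // shown immediately
  await new Promise((r) => setTimeout(r, 10)); // stand-in for data processing
  return "bar-chart";                          // replaces the skeleton
}

export async function consume(): Promise<string[]> {
  const frames: string[] = [];
  const gen = generate();
  let result = await gen.next();
  while (!result.done) {
    frames.push(result.value); // intermediate UI states
    result = await gen.next();
  }
  frames.push(result.value);   // the returned final state
  return frames;
}
```

Consuming `generate()` yields `"skeleton"` first and `"bar-chart"` last, which is exactly the swap the client performs visually.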
Building the React Components
Each component is a standard React component. The agent fills in the props based on its reasoning about the user request.
// components/BarChart.tsx
interface BarChartProps {
  title: string;
  data: { label: string; value: number }[];
}

export function BarChart({ title, data }: BarChartProps) {
  // Guard against empty or all-zero data so the width math never divides by zero
  const max = Math.max(...data.map((d) => d.value), 1);
  return (
    <div className="p-4 border rounded-lg">
      <h3 className="text-lg font-semibold mb-4">{title}</h3>
      <div className="space-y-2">
        {data.map((item) => (
          <div key={item.label} className="flex items-center gap-2">
            <span className="w-20 text-sm">{item.label}</span>
            <div className="flex-1 bg-gray-100 rounded">
              <div
                className="h-6 bg-blue-500 rounded"
                style={{ width: `${(item.value / max) * 100}%` }}
              />
            </div>
            <span className="text-sm font-medium">{item.value}</span>
          </div>
        ))}
      </div>
    </div>
  );
}
Client-Side Integration
On the client, you call the server action and render whatever component stream comes back.
// app/page.tsx
"use client";

import { useState, type ReactNode } from "react";
import { agentChat } from "./actions";

export default function Chat() {
  const [messages, setMessages] = useState<ReactNode[]>([]);
  const [input, setInput] = useState("");

  async function handleSubmit() {
    const component = await agentChat(input);
    setMessages((prev) => [...prev, component]);
    setInput("");
  }

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4">
        {messages.map((msg, i) => (
          <div key={i}>{msg}</div>
        ))}
      </div>
      <form
        onSubmit={(e) => {
          e.preventDefault();
          handleSubmit();
        }}
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Ask about your data..."
          className="w-full p-2 border rounded"
        />
      </form>
    </div>
  );
}
When the user types "show me revenue by quarter," the LLM calls showBarChart with the appropriate data, and a fully interactive bar chart appears in the chat — not a text description of one.
Adding Interactive Components
Generative UI shines when components are interactive. A rendered table can have sort buttons. A chart can have filters. The agent generates the initial state, and React handles the interactivity.
showDataTable: {
  description: "Display a sortable data table",
  parameters: z.object({
    columns: z.array(z.string()),
    rows: z.array(z.array(z.string())),
  }),
  generate: async function* ({ columns, rows }) {
    yield <p>Loading table...</p>;
    return <SortableTable columns={columns} rows={rows} />;
  },
},
The SortableTable component is a client component with useState for sort state — the agent does not need to know about the interactivity. It just provides the data.
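The sorting core of such a component can live in a pure function that a click handler calls before writing the result into useState. This is a hypothetical helper (the article does not show SortableTable's internals); it compares numerically when both cells parse as numbers, and as strings otherwise:

```typescript
// Hypothetical sorting core for a SortableTable client component:
// a pure function a useState-driven click handler can call.
type Direction = "asc" | "desc";

export function sortRows(
  rows: string[][],
  columnIndex: number,
  direction: Direction
): string[][] {
  // Copy first so the rows prop the agent provided is never mutated
  return [...rows].sort((a, b) => {
    const x = Number(a[columnIndex]);
    const y = Number(b[columnIndex]);
    // Numeric comparison when both cells are numbers, lexicographic otherwise
    const cmp =
      !Number.isNaN(x) && !Number.isNaN(y)
        ? x - y
        : a[columnIndex].localeCompare(b[columnIndex]);
    return direction === "asc" ? cmp : -cmp;
  });
}
```

For example, sorting `[["Q1", "120"], ["Q3", "90"], ["Q2", "200"]]` on column 1 ascending orders the rows 90, 120, 200 rather than lexicographically.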
When to Use Generative UI vs. Structured Output
Use structured output (JSON) when the client already has the components built and just needs data. Use generative UI when you want the agent to decide which component to show. If your agent might respond with a chart, a table, a form, or a card depending on context, generative UI lets the model make that rendering decision.
FAQ
Does generative UI work with non-OpenAI models?
Yes. The Vercel AI SDK supports any model provider that implements its model interface. Anthropic, Google, Mistral, and local models via Ollama all work with streamUI. The tool-calling capability of the model is what matters — it needs to reliably produce structured parameters for your component tools.
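Swapping providers is essentially a one-line configuration change, assuming the matching `@ai-sdk/*` package is installed. A sketch (the model ID below is illustrative; use whichever the provider currently offers):

```typescript
// Config sketch: same streamUI call, different provider.
import { streamUI } from "ai/rsc";
import { anthropic } from "@ai-sdk/anthropic";

export async function agentChatWithClaude(userMessage: string) {
  return streamUI({
    model: anthropic("claude-3-5-sonnet-latest"), // only this line changes
    system: "You are a data analyst assistant. Use tools to show visual components.",
    messages: [{ role: "user", content: userMessage }],
    tools: { /* same component tools as before */ },
  });
}
```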
How do you handle errors when the LLM generates invalid component props?
The Zod schema validation in the tool parameters catches malformed props before the generate function runs. If the LLM passes an invalid value, the SDK returns a validation error that you can catch and display as a fallback component. Always define strict schemas with sensible defaults.
Can generative UI components trigger further agent interactions?
Absolutely. Components can include buttons or forms that call additional server actions. A rendered search result card could have a "deep dive" button that triggers another streamUI call, creating a multi-turn visual conversation where each step renders progressively richer interfaces.
#GenerativeUI #VercelAI #ReactComponents #AIAgents #TypeScript #StreamingUI #ServerComponents #NextJS
CallSphere Team
Expert insights on AI voice agents and customer communication automation.