# Vercel AI SDK - Unified AI model integration with streaming and React hooks

## Recipe
```bash
npm install ai @ai-sdk/openai @ai-sdk/anthropic
```

```ts
// app/api/chat/route.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai("gpt-4o"),
    messages,
  });
  return result.toDataStreamResponse();
}
```

```tsx
// app/chat/page.tsx
"use client";
import { useChat } from "@ai-sdk/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

**When to reach for this:** You need to integrate LLM chat, completions, or tool calling into a React or Next.js app with streaming support and minimal boilerplate.
## Working Example
```tsx
// app/components/ChatWithTools.tsx
"use client";
import { useChat } from "@ai-sdk/react";

export default function ChatWithTools() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({
      api: "/api/chat-tools",
      onError: (error) => console.error("Chat error:", error),
    });
  return (
    <div className="mx-auto max-w-2xl p-4">
      <div className="space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`rounded-lg p-3 ${
              message.role === "user" ? "bg-blue-100 ml-auto" : "bg-gray-100"
            }`}
          >
            <p className="text-sm font-medium">{message.role}</p>
            <p>{message.content}</p>
            {message.toolInvocations?.map((tool, i) => (
              <pre key={i} className="mt-2 text-xs bg-gray-200 p-2 rounded">
                {JSON.stringify(tool, null, 2)}
              </pre>
            ))}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="mt-4 flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask a question..."
          className="flex-1 border rounded px-3 py-2"
          disabled={isLoading}
        />
        <button
          type="submit"
          disabled={isLoading}
          className="bg-blue-600 text-white px-4 py-2 rounded disabled:opacity-50"
        >
          {isLoading ? "Thinking..." : "Send"}
        </button>
      </form>
    </div>
  );
}
```

```ts
// app/api/chat-tools/route.ts
import { streamText, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: anthropic("claude-sonnet-4-20250514"),
    system: "You are a helpful assistant that can look up weather.",
    messages,
    tools: {
      getWeather: tool({
        description: "Get the current weather for a location",
        parameters: z.object({
          location: z.string().describe("City name"),
        }),
        execute: async ({ location }) => {
          // Replace with a real API call
          return { location, temperature: 72, condition: "sunny" };
        },
      }),
    },
    maxSteps: 5,
  });
  return result.toDataStreamResponse();
}
```

**What this demonstrates:**

- Full streaming chat UI with the `useChat` hook
- Tool calling with Zod schema validation
- Multi-step tool execution with `maxSteps`
- Loading state management and error handling
- Anthropic Claude model usage
## Deep Dive

### How It Works

- The AI SDK provides a unified interface across providers (OpenAI, Anthropic, Google, Mistral, etc.), so you can swap models by changing one line
- `streamText` returns a `StreamTextResult` that converts to a `Response` via `toDataStreamResponse()`, sending Server-Sent Events to the client
- `useChat` manages the full conversation state: message history, input state, loading, error, and abort controller
- Tool calling uses Zod schemas for parameter validation; the SDK automatically handles the tool call/result loop for up to `maxSteps` iterations
- `generateText` is the non-streaming counterpart for batch processing or server-side generation
- The SDK uses a Data Stream Protocol that encodes text deltas, tool calls, tool results, and finish reasons into a single stream
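The Data Stream Protocol point can be made concrete with a small decoder. This is only a sketch assuming the v4 wire format, where each newline-delimited part looks like `TYPE:JSON` and the `0:` prefix carries a text delta; the prefixes are an internal detail of the SDK, so verify against the protocol docs for your version before relying on them.

```typescript
// Sketch: pull the plain text out of an AI SDK data stream payload.
// Assumes v4-style parts such as `0:"delta"` (text), `9:{...}` (tool call),
// `d:{...}` (finish) -- treat these prefixes as an implementation detail.
function extractText(payload: string): string {
  let text = "";
  for (const line of payload.split("\n")) {
    const sep = line.indexOf(":");
    if (sep === -1) continue; // not a protocol part
    if (line.slice(0, sep) === "0") {
      // Text deltas are JSON-encoded strings after the `0:` prefix
      text += JSON.parse(line.slice(sep + 1));
    }
  }
  return text;
}

// Two text deltas followed by a finish part from a hypothetical stream
console.log(extractText('0:"Hello"\n0:" world"\nd:{"finishReason":"stop"}'));
```

In practice `useChat` does this decoding for you; the sketch just shows why returning plain JSON from the route breaks the hook, which expects parts in this shape.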
### Variations

**Using OpenRouter for model access:**

```bash
npm install @openrouter/ai-sdk-provider
```

```ts
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const result = streamText({
  model: openrouter("anthropic/claude-sonnet-4-20250514"),
  messages,
});
```

**Non-streaming generation:**
```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const { text, usage } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Summarize this article in 3 bullets.",
});
```

**Structured output with `generateObject`:**
```ts
import { generateObject } from "ai";
import { z } from "zod";

const { object } = await generateObject({
  model: openai("gpt-4o"),
  schema: z.object({
    recipe: z.string(),
    ingredients: z.array(z.string()),
    steps: z.array(z.string()),
  }),
  prompt: "Generate a recipe for chocolate chip cookies.",
});
```

**`useCompletion` for single-turn prompts:**
```tsx
"use client";
import { useCompletion } from "@ai-sdk/react";

export default function Completion() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: "/api/completion",
  });
  return (
    <div>
      <p>{completion}</p>
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```

### TypeScript Notes

- The `Message` type from `ai` includes `id`, `role`, `content`, and optional `toolInvocations`
- Tool parameters are inferred from Zod schemas, giving full type safety in `execute` functions
- `useChat` returns strongly typed `messages: Message[]`
- Provider-specific options can be passed via the `providerOptions` field
```ts
import type { Message } from "ai";

// Tool result types are inferred from the Zod schema
const weatherTool = tool({
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => {
    // city is typed as string
    return { temp: 72 }; // return type is inferred
  },
});
```

### Gotchas
- **Missing API keys** — The SDK throws at runtime if the provider's API key env var is not set. Fix: set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc. in `.env.local`. Each provider has its own expected env var name.
- **`useChat` not updating** — If messages don't stream in, the API route may not be returning a data stream response. Fix: always return `result.toDataStreamResponse()`, not `result.text` or a plain JSON response.
- **Tool calls not executing** — `maxSteps` defaults to 1, so the model can call a tool but never processes its result. Fix: set `maxSteps: 5` (or higher) to let the model read tool results and continue.
- **CORS errors in development** — API routes must be in the same Next.js app. Fix: use Next.js API routes (`app/api/`) rather than calling an external server directly from `useChat`.
- **Large bundle from provider imports** — Importing providers in client code increases bundle size. Fix: provider imports are server-only; keep them in API routes or Server Actions, never in `"use client"` files.
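For the API-key gotcha, a typical `.env.local` looks like the sketch below. The OpenAI and Anthropic variable names are the documented defaults; other providers follow the same pattern but may use different names, so check each provider package's docs.

```shell
# .env.local (never commit this file)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# Only needed for the OpenRouter variation shown above
OPENROUTER_API_KEY=sk-or-...
```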
## Alternatives
| Library | Best For | Trade-off |
|---|---|---|
| Vercel AI SDK | Full-stack React/Next.js AI apps | Opinionated, tied to Vercel ecosystem conventions |
| LangChain.js | Complex chains, RAG, agents | Heavier abstraction, steeper learning curve |
| OpenAI SDK directly | Simple OpenAI-only usage | No streaming React hooks, single provider |
| Anthropic SDK directly | Simple Claude-only usage | No streaming React hooks, single provider |
| LlamaIndex.ts | Data indexing and retrieval | Focused on RAG, less on chat UI |
## FAQs
**What does `streamText` return and how does it reach the client?**

- `streamText` returns a `StreamTextResult` object
- Call `.toDataStreamResponse()` to convert it into a `Response` with Server-Sent Events
- The `useChat` hook on the client consumes this stream and updates messages in real time
- Never return `result.text` or plain JSON -- the hook expects the data stream protocol
**How do I swap between OpenAI and Anthropic models?**

```ts
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

// Just change the model line:
const result = streamText({
  model: anthropic("claude-sonnet-4-20250514"),
  messages,
});
```

The unified interface means only the model parameter changes.
**What is `maxSteps` and why do my tool calls not execute without it?**

- `maxSteps` controls how many tool-call-and-result rounds the model can perform
- The default is 1, meaning the model can call a tool but cannot process its result
- Set `maxSteps: 5` or higher for multi-step tool use
- Each step is one model turn (text or tool call) followed by one tool result
**How do I define a tool with Zod schema validation?**

```ts
import { tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get weather for a city",
  parameters: z.object({
    city: z.string().describe("City name"),
  }),
  execute: async ({ city }) => {
    return { temp: 72, condition: "sunny" };
  },
});
```

Tool parameter types are inferred from the Zod schema automatically.
**Gotcha: Why are messages not streaming in my `useChat` component?**

- The API route must return `result.toDataStreamResponse()`, not a JSON response
- Verify the `api` prop in `useChat` points to the correct route
- Check browser devtools for errors -- missing API keys cause server-side throws
- Ensure the route is in the same Next.js app to avoid CORS issues
**What is the difference between `streamText` and `generateText`?**

- `streamText` sends tokens to the client as they are generated (streaming)
- `generateText` waits for the full response and returns it at once
- Use `streamText` for chat UIs where you want real-time token display
- Use `generateText` for batch processing or server-side generation where streaming is not needed
How do I generate structured JSON output instead of free-form text?
import { generateObject } from "ai";
import { z } from "zod";
const { object } = await generateObject({
model: openai("gpt-4o"),
schema: z.object({
title: z.string(),
tags: z.array(z.string()),
}),
prompt: "Generate metadata for a blog post about React.",
});How do I use useCompletion for single-turn prompts instead of multi-turn chat?
useCompletionmanages a single prompt/response cycle, not a conversation- It returns
completion(the response text),input, and form handlers - Point
apito a route that usesstreamTextwith apromptinstead ofmessages
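A minimal sketch of such a completion route, assuming the default `useCompletion` behavior of POSTing `{ prompt }` to its `api` endpoint; the route path and model choice here are illustrative, not prescribed by the SDK.

```typescript
// app/api/completion/route.ts -- hypothetical path matching the
// useCompletion({ api: "/api/completion" }) example above
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  // useCompletion sends { prompt } in the request body by default
  const { prompt } = await req.json();
  const result = streamText({
    model: openai("gpt-4o"),
    prompt, // single-turn: a prompt string instead of a messages array
  });
  return result.toDataStreamResponse();
}
```

The only difference from the chat route in the Recipe is `prompt` in place of `messages`; the streaming response handling is identical.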
**Gotcha: Why does importing a provider in a client component bloat my bundle?**

- Provider packages (`@ai-sdk/openai`, `@ai-sdk/anthropic`) are server-only
- Importing them in a `"use client"` file bundles them into the client JavaScript
- Keep all provider imports in API routes or Server Actions
- `useChat` and `useCompletion` are the only AI SDK imports safe for client components
**How are the TypeScript types for tool parameters and `useChat` messages defined?**

- Tool parameters are inferred from Zod schemas -- no manual types needed
- `useChat` returns `messages: Message[]`, where `Message` includes `id`, `role`, `content`, and optional `toolInvocations`
- Import `Message` from `"ai"` if you need to type props that accept messages

```ts
import type { Message } from "ai";
```

**How do I use OpenRouter as a provider for model access?**

```ts
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const result = streamText({
  model: openrouter("anthropic/claude-sonnet-4-20250514"),
  messages,
});
```

## Related
- TanStack Query — Cache and manage AI response data
- Next.js Server Actions — Alternative to API routes for AI calls
- TypeScript React — Type patterns used with AI SDK