Fix: Vercel AI SDK Not Working — Streaming Not Rendering, useChat Stuck Loading, or Provider Errors
Quick Answer
Most Vercel AI SDK failures trace back to four causes: the API route returning the wrong stream format for useChat (use result.toDataStreamResponse(), not result.toTextStreamResponse()), a missing provider package such as @ai-sdk/openai, a missing or client-exposed API key, or useChat posting to a route path that doesn't exist. The fixes below cover each, plus streaming with streamText, tool calling, structured output, RAG, and useCompletion.
The Problem
useChat sends the message but the response never appears:
'use client';
import { useChat } from 'ai/react';
function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
// Message sends, loading flickers, but no assistant response appears
}
Or the streaming response throws:
Error: Failed to parse stream — or —
Error: AI_APICallError: 401 Incorrect API key provided
Or streamText works in the API route but the client receives nothing:
// app/api/chat/route.ts
const result = streamText({ model: openai('gpt-4o'), messages });
return result.toDataStreamResponse();
// Client gets a 200 but no streamed content
Why This Happens
The Vercel AI SDK has two layers — a server-side core (ai) for calling AI providers and a client-side React layer (ai/react) for rendering streamed responses:
- useChat expects a specific streaming format — the API route must return result.toDataStreamResponse() (not result.toTextStreamResponse()). The data stream format includes metadata that useChat needs to parse messages correctly. Using the wrong format causes silent failures.
- Provider packages are separate — @ai-sdk/openai, @ai-sdk/anthropic, @ai-sdk/google, etc. must be installed individually. The ai core package doesn't include any provider. Calling openai('gpt-4o') without @ai-sdk/openai installed throws an import error.
- API keys must be server-side only — keys are read from environment variables on the server. OPENAI_API_KEY, ANTHROPIC_API_KEY, etc. must be set in .env.local (not prefixed with NEXT_PUBLIC_). Exposing API keys to the client is a security risk.
- useChat calls /api/chat by default — if your API route lives at a different path, pass api: '/api/my-chat-route' to useChat. A 404 response causes silent failure.
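The format mismatch in the first cause is easiest to see at the wire level. The data stream protocol is line-based, with a type prefix before each JSON payload (e.g. `0:` for a text delta), while a plain text stream has no framing at all, so useChat has nothing to parse. A simplified sketch of the framing (the real protocol has more part types):

```typescript
// Simplified parser for the AI SDK data stream wire format: each line is
// `<type>:<JSON payload>`, e.g. `0:"Hello"` for a text delta. This only
// illustrates the framing — it is not the SDK's actual parser.
function parseDataStreamLine(line: string): { type: string; value: unknown } {
  const sep = line.indexOf(':');
  if (sep === -1) throw new Error(`Not a data stream line: ${line}`);
  return { type: line.slice(0, sep), value: JSON.parse(line.slice(sep + 1)) };
}

const chunks = ['0:"Hello"', '0:" world"'];
const text = chunks
  .map(parseDataStreamLine)
  .filter(part => part.type === '0') // "0" parts are text deltas
  .map(part => part.value as string)
  .join('');
// text === "Hello world"
```

A raw text stream (`toTextStreamResponse()`) sends `Hello world` with no `0:` prefixes, so a parser expecting the framed format finds no text parts — which is exactly the silent failure described above.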
Fix 1: Basic Chat with useChat
npm install ai @ai-sdk/openai
# Or: npm install ai @ai-sdk/anthropic
// app/api/chat/route.ts — server-side streaming
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
system: 'You are a helpful assistant.',
messages,
maxTokens: 1000,
});
// IMPORTANT: use toDataStreamResponse for useChat compatibility
return result.toDataStreamResponse();
}
// components/Chat.tsx — client-side
'use client';
import { useChat } from 'ai/react';
export function Chat() {
const {
messages,
input,
handleInputChange,
handleSubmit,
isLoading,
error,
reload,
stop,
} = useChat({
api: '/api/chat', // Default — can be omitted
// Optional: initial messages
initialMessages: [
{ id: '1', role: 'assistant', content: 'How can I help you?' },
],
// Callback when response completes
onFinish: (message) => {
console.log('Response complete:', message.content);
},
onError: (error) => {
console.error('Chat error:', error);
},
});
return (
<div>
{/* Message list */}
<div>
{messages.map(m => (
<div key={m.id} className={m.role === 'user' ? 'text-right' : 'text-left'}>
<strong>{m.role === 'user' ? 'You' : 'AI'}:</strong>
<p>{m.content}</p>
</div>
))}
</div>
{/* Error display */}
{error && (
<div className="text-red-500">
Error: {error.message}
<button onClick={() => reload()}>Retry</button>
</div>
)}
{/* Input form */}
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={handleInputChange}
placeholder="Type a message..."
disabled={isLoading}
/>
{isLoading ? (
<button type="button" onClick={stop}>Stop</button>
) : (
<button type="submit">Send</button>
)}
</form>
</div>
);
}
Fix 2: Multiple Providers
npm install @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google
// lib/ai.ts — provider setup
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
// Each provider reads its API key from environment variables:
// OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY
export const models = {
'gpt-4o': openai('gpt-4o'),
'gpt-4o-mini': openai('gpt-4o-mini'),
'claude-sonnet': anthropic('claude-sonnet-4-20250514'),
'claude-haiku': anthropic('claude-haiku-4-5-20251001'),
'gemini-pro': google('gemini-2.0-flash'),
} as const;
export type ModelId = keyof typeof models;
// app/api/chat/route.ts — dynamic model selection
import { streamText } from 'ai';
import { models, type ModelId } from '@/lib/ai';
export async function POST(req: Request) {
const { messages, model: modelId } = await req.json();
const model = models[modelId as ModelId] ?? models['gpt-4o-mini'];
const result = streamText({
model,
messages,
maxTokens: 2000,
temperature: 0.7,
});
return result.toDataStreamResponse();
}
// Client — pass model selection
'use client';
import { useChat } from 'ai/react';
function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat({
body: {
model: 'claude-sonnet', // Extra data sent with each request
},
});
return (/* ... */);
}
Fix 3: Tool Calling (Function Calling)
// app/api/chat/route.ts
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
getWeather: tool({
description: 'Get the current weather for a location',
parameters: z.object({
city: z.string().describe('The city name'),
unit: z.enum(['celsius', 'fahrenheit']).default('celsius'),
}),
execute: async ({ city, unit }) => {
// Call your weather API
const data = await fetch(
`https://api.weather.example.com/${city}?unit=${unit}`
).then(r => r.json());
return { temperature: data.temp, condition: data.condition, city };
},
}),
searchDocuments: tool({
description: 'Search internal documents',
parameters: z.object({
query: z.string().describe('Search query'),
limit: z.number().default(5),
}),
execute: async ({ query, limit }) => {
const results = await searchIndex(query, limit);
return results;
},
}),
},
maxSteps: 5, // Allow multi-step tool use
});
return result.toDataStreamResponse();
}
// Client — render tool results
'use client';
import { useChat } from 'ai/react';
function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
<div>
{messages.map(m => (
<div key={m.id}>
{m.role === 'user' && <p><strong>You:</strong> {m.content}</p>}
{m.role === 'assistant' && (
<div>
{/* Text content */}
{m.content && <p>{m.content}</p>}
{/* Tool invocations */}
{m.toolInvocations?.map((tool, i) => (
<div key={i} className="bg-gray-50 p-2 rounded text-sm">
<p>🔧 Called: {tool.toolName}</p>
{tool.state === 'result' && (
<pre>{JSON.stringify(tool.result, null, 2)}</pre>
)}
</div>
))}
</div>
)}
</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
<button type="submit">Send</button>
</form>
</div>
);
}
Fix 4: generateText and generateObject (Non-Streaming)
// For single-shot generation (not streaming)
import { generateText, generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
// Generate text
const { text } = await generateText({
model: openai('gpt-4o-mini'),
prompt: 'Summarize the key features of TypeScript in 3 bullet points.',
});
// Generate structured data (JSON)
const { object } = await generateObject({
model: openai('gpt-4o'),
schema: z.object({
title: z.string(),
summary: z.string(),
tags: z.array(z.string()),
sentiment: z.enum(['positive', 'negative', 'neutral']),
}),
prompt: 'Analyze this product review: "Great battery life but the camera could be better."',
});
// object = { title: "Mixed Review", summary: "...", tags: [...], sentiment: "neutral" }
// Stream structured data
import { streamObject } from 'ai';
const result = streamObject({
model: openai('gpt-4o'),
schema: z.object({
recipe: z.object({
name: z.string(),
ingredients: z.array(z.string()),
steps: z.array(z.string()),
}),
}),
prompt: 'Generate a recipe for chocolate chip cookies.',
});
for await (const partialObject of result.partialObjectStream) {
console.log(partialObject); // Partial object updates as they stream in
}
Fix 5: RAG (Retrieval-Augmented Generation)
// app/api/chat/route.ts — RAG pattern
import { streamText, embed } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function POST(req: Request) {
const { messages } = await req.json();
const lastMessage = messages[messages.length - 1].content;
// 1. Generate embedding for the user's question
const { embedding } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: lastMessage,
});
// 2. Search your vector database
const relevantDocs = await vectorDb.search({
vector: embedding,
topK: 5,
});
// 3. Build context from retrieved documents
const context = relevantDocs
.map(doc => `[${doc.metadata.title}]: ${doc.content}`)
.join('\n\n');
// 4. Generate response with context
const result = streamText({
model: openai('gpt-4o'),
system: `You are a helpful assistant. Answer based on the following context. If the context doesn't contain the answer, say so.\n\nContext:\n${context}`,
messages,
});
return result.toDataStreamResponse();
}
Fix 6: useCompletion (Text Completion)
// For single-prompt completion (not chat)
// app/api/completion/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function POST(req: Request) {
const { prompt } = await req.json();
const result = streamText({
model: openai('gpt-4o-mini'),
prompt,
});
return result.toDataStreamResponse();
}
// Client
'use client';
import { useCompletion } from 'ai/react';
function TextGenerator() {
const {
completion,
input,
handleInputChange,
handleSubmit,
isLoading,
} = useCompletion({
api: '/api/completion',
});
return (
<div>
<form onSubmit={handleSubmit}>
<textarea
value={input}
onChange={handleInputChange}
placeholder="Write a prompt..."
/>
<button type="submit" disabled={isLoading}>
{isLoading ? 'Generating...' : 'Generate'}
</button>
</form>
<div className="whitespace-pre-wrap">{completion}</div>
</div>
);
}
Still Not Working?
Stream connects but no text appears in useChat — you’re likely using toTextStreamResponse() instead of toDataStreamResponse(). useChat requires the data stream format, which includes message metadata. toTextStreamResponse() is for raw text streaming consumed without the React hooks.
401 or 403 from the AI provider — the API key isn’t set or is invalid. Check that OPENAI_API_KEY (not NEXT_PUBLIC_OPENAI_API_KEY) is set in .env.local. The provider reads the key from the environment automatically. If you need a custom key or endpoint, create a provider instance with createOpenAI({ apiKey: process.env.MY_KEY }) from @ai-sdk/openai — the model factory openai('gpt-4o') itself does not accept an apiKey option.
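A provider configuration fragment with an explicit key looks like this (createOpenAI is the documented factory in @ai-sdk/openai; MY_OPENAI_KEY and the proxy URL are placeholders):

```typescript
// lib/custom-openai.ts — provider instance with an explicit key
// (MY_OPENAI_KEY is a placeholder for your own env variable name)
import { createOpenAI } from '@ai-sdk/openai';

export const customOpenAI = createOpenAI({
  apiKey: process.env.MY_OPENAI_KEY, // read server-side only
  // baseURL: 'https://my-proxy.example.com/v1', // optional: custom endpoint
});

// Use it exactly like the default provider: customOpenAI('gpt-4o')
```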
useChat sends but response is empty, no error — check the browser’s Network tab for the /api/chat response. If the response body is empty or the status is not 200, the API route has an error. Add a try/catch in your route handler and return a proper error response.
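One way to make route errors visible is to wrap the handler body in try/catch and return explicit status codes. A sketch, with a hypothetical `generate` callback standing in for the streamText call so the error-handling shape is testable on its own:

```typescript
// Sketch: route handler factory that surfaces errors instead of an empty 200.
// `generate` is a stand-in for your streamText + toDataStreamResponse call.
type Generate = (messages: unknown[]) => Promise<Response>;

export function makeChatRoute(generate: Generate) {
  return async function POST(req: Request): Promise<Response> {
    try {
      const body = await req.json(); // throws on invalid JSON
      if (!Array.isArray(body?.messages)) {
        // malformed payload — tell the client instead of failing silently
        return Response.json({ error: 'messages must be an array' }, { status: 400 });
      }
      return await generate(body.messages);
    } catch (err) {
      console.error('Chat route error:', err);
      const message = err instanceof Error ? err.message : 'Unknown error';
      return Response.json({ error: message }, { status: 500 });
    }
  };
}
```

With this shape, the Network tab shows a 400 or 500 with a JSON body you can read, rather than an empty response.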
Tool calls work but the AI doesn’t use the tool result — set maxSteps to a value greater than 1. By default, the AI makes one step. With tools, it needs at least 2 steps: one to call the tool and one to respond with the result. Set maxSteps: 5 for complex multi-tool interactions.
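The step budget can be pictured with a toy loop (purely illustrative — the real loop lives inside streamText):

```typescript
// Toy model of the step loop inside streamText (illustrative only).
// Step 1 emits a tool call; only a later step can emit the final text.
type Step = { type: 'tool-call'; toolName: string } | { type: 'text'; text: string };

function runSteps(maxSteps: number): Step[] {
  const steps: Step[] = [];
  for (let i = 0; i < maxSteps; i++) {
    if (i === 0) {
      // the model decides to call a tool first
      steps.push({ type: 'tool-call', toolName: 'getWeather' });
    } else {
      // with a second step available, the model answers using the tool result
      steps.push({ type: 'text', text: 'It is 22°C in Tokyo.' });
      break;
    }
  }
  return steps;
}
// With maxSteps: 1 the loop ends right after the tool call — no answer is produced.
```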
For related API and backend issues, see Fix: Next.js App Router Not Working and Fix: Hono Not Working.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.