
Add OpenAI Integration to Next.js (Cursor Prompt)

Cursor prompt to add OpenAI API integration with streaming responses using the Vercel AI SDK.

What you'll get

OpenAI integration with streaming chat, Vercel AI SDK, chat UI, and conversation history.

The Prompt

Add OpenAI integration with streaming responses to this Next.js app.

FILES TO CREATE:
- src/lib/openai.ts — OpenAI client initialization
- src/app/api/chat/route.ts — Streaming chat endpoint using Vercel AI SDK
- src/app/api/generate/route.ts — Non-streaming generation endpoint
- src/components/ai/ChatInterface.tsx — Chat UI with message history
- src/components/ai/GenerateButton.tsx — One-click generation component

IMPLEMENTATION:
1. Install ai (Vercel AI SDK) and @ai-sdk/openai.
2. In openai.ts, create the OpenAI provider using createOpenAI with OPENAI_API_KEY.
3. Chat route uses streamText() from 'ai' with model [AI_MODEL]. Accept a messages array and an optional system prompt (default: [SYSTEM_PROMPT]). Return the stream using toDataStreamResponse().
4. Generate route uses generateText() for one-shot completions (summaries, descriptions, etc.).
5. ChatInterface uses useChat() hook from 'ai/react'. Display messages with user/assistant avatars, markdown rendering for assistant messages, typing indicator, and a text input with send button.
6. Add rate limiting: max 20 requests per minute per user.
7. Store conversation history in Supabase if the user is authenticated.

DO NOT:
- Use the OpenAI SDK directly — use Vercel AI SDK for the abstraction layer
- Stream directly from client to OpenAI — always proxy through your API route
- Expose OPENAI_API_KEY to the client

ENVIRONMENT VARIABLES:
- OPENAI_API_KEY
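For reference, a chat route generated from the prompt above might look roughly like this sketch (assuming AI SDK v4-style APIs: streamText and toDataStreamResponse from 'ai', createOpenAI from '@ai-sdk/openai'; the model name and default system prompt are placeholders):

```typescript
// Sketch of src/app/api/chat/route.ts — illustrative, not the exact code
// Cursor will produce. Assumes `ai` and `@ai-sdk/openai` are installed.
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

// OPENAI_API_KEY stays server-side; this module never ships to the client.
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { messages, system } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'), // example model; swap in your [AI_MODEL]
    system: system ?? 'You are a helpful assistant.',
    messages,
  });

  // Stream tokens back in the wire format useChat() expects.
  return result.toDataStreamResponse();
}
```

The generate route is the non-streaming twin: call generateText() with the same provider and return the resulting text as JSON.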

Replace these variables

| Variable | Replace with |
| --- | --- |
| [AI_MODEL] | OpenAI model to use (e.g., gpt-4o-mini, gpt-4o) |
| [SYSTEM_PROMPT] | Default system prompt for the AI assistant |

Tips for best results

Use gpt-4o-mini for most tasks — it's roughly 15x cheaper than gpt-4o with comparable quality for chat.

Always proxy through your API route so you can add rate limiting and logging.
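The "20 requests per minute" limit from the prompt can be enforced with a small sliding-window check inside the route. A minimal in-memory sketch (fine for a single server; a real deployment would likely use a shared store such as Redis or Upstash, and the function name here is illustrative):

```typescript
// Sliding-window rate limiter: allow at most MAX_REQUESTS per WINDOW_MS per user.
type Bucket = number[]; // timestamps (ms) of this user's recent requests

const WINDOW_MS = 60_000;
const MAX_REQUESTS = 20;
const buckets = new Map<string, Bucket>();

function isRateLimited(userId: string, now: number = Date.now()): boolean {
  // Keep only requests that fall inside the current window.
  const recent = (buckets.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    buckets.set(userId, recent);
    return true; // over the limit — the route should return HTTP 429
  }
  recent.push(now);
  buckets.set(userId, recent);
  return false;
}
```

In the route handler you would call isRateLimited(userId) before invoking streamText() and return a 429 response when it reports true.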

Follow-up prompts

Add RAG

Add retrieval-augmented generation by embedding your content with OpenAI, storing vectors in Supabase pgvector, and including relevant context in prompts.
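The retrieval step of that follow-up could be sketched as below — a hedged outline, not a finished implementation. It assumes the AI SDK's embed() helper, a Supabase table indexed with pgvector, and an RPC named match_documents (the RPC name, parameters, and embedding model are all assumptions you would adapt to your schema):

```typescript
// Sketch of the RAG retrieval step. Assumes `ai`, `@ai-sdk/openai`, and
// `@supabase/supabase-js` are installed; match_documents is a hypothetical
// pgvector similarity-search RPC on your Supabase project.
import { createOpenAI } from '@ai-sdk/openai';
import { embed } from 'ai';
import { createClient } from '@supabase/supabase-js';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
);

async function retrieveContext(query: string): Promise<string> {
  // Embed the user's question with an OpenAI embedding model.
  const { embedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: query,
  });

  // Nearest-neighbor search over stored chunks via pgvector.
  const { data } = await supabase.rpc('match_documents', {
    query_embedding: embedding,
    match_count: 5,
  });

  return (data ?? []).map((d: { content: string }) => d.content).join('\n---\n');
}
```

The returned context string would then be appended to the system prompt before calling streamText().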
