chore (ai/core): streamText returns result immediately (no Promise) #3658

Merged · 7 commits · Nov 13, 2024
Changes from 6 commits

5 changes: 5 additions & 0 deletions .changeset/tasty-dots-burn.md
@@ -0,0 +1,5 @@
+---
+'ai': major
+---
+
+chore (ai/core): streamText returns result immediately (no Promise)
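
In practice, this changeset means callers drop the `await` in front of `streamText` and await only the streamed values they consume. A minimal sketch of the migration, reusing the model and prompt from the doc examples below (any model and prompt work the same way):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Before this change: const result = await streamText({ ... });
// After: the result object is returned synchronously, no Promise to await.
const result = streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a poem about embedding models.',
});

// Consuming the stream is still asynchronous.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```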

2 changes: 1 addition & 1 deletion content/docs/02-foundations/05-streaming.mdx
@@ -49,7 +49,7 @@ However, regardless of the speed of your model, the AI SDK is designed to make i
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

-const { textStream } = await streamText({
+const { textStream } = streamText({
model: openai('gpt-4-turbo'),
prompt: 'Write a poem about embedding models.',
});

6 changes: 3 additions & 3 deletions content/docs/02-getting-started/02-nextjs-app-router.mdx
@@ -98,7 +98,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
});
@@ -194,7 +194,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
@@ -316,7 +316,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {

6 changes: 3 additions & 3 deletions content/docs/02-getting-started/03-nextjs-pages-router.mdx
@@ -104,7 +104,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
});
@@ -194,7 +194,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
@@ -312,7 +312,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {

6 changes: 3 additions & 3 deletions content/docs/02-getting-started/04-svelte.mdx
@@ -96,7 +96,7 @@ const openai = createOpenAI({
export const POST = (async ({ request }) => {
const { messages } = await request.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
});
@@ -187,7 +187,7 @@ const openai = createOpenAI({
export const POST = (async ({ request }) => {
const { messages } = await request.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
@@ -304,7 +304,7 @@ const openai = createOpenAI({
export const POST = (async ({ request }) => {
const { messages } = await request.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {

6 changes: 3 additions & 3 deletions content/docs/02-getting-started/05-nuxt.mdx
@@ -105,7 +105,7 @@ export default defineLazyEventHandler(async () => {
return defineEventHandler(async (event: any) => {
const { messages } = await readBody(event);

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
});
@@ -202,7 +202,7 @@ export default defineLazyEventHandler(async () => {
return defineEventHandler(async (event: any) => {
const { messages } = await readBody(event);

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
@@ -325,7 +325,7 @@ export default defineLazyEventHandler(async () => {
return defineEventHandler(async (event: any) => {
const { messages } = await readBody(event);

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo-preview'),
messages,
tools: {

10 changes: 5 additions & 5 deletions content/docs/02-getting-started/06-nodejs.mdx
@@ -95,7 +95,7 @@ async function main() {

messages.push({ role: 'user', content: userInput });

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
});
@@ -171,7 +171,7 @@ async function main() {

messages.push({ role: 'user', content: userInput });

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
@@ -241,7 +241,7 @@ async function main() {

messages.push({ role: 'user', content: userInput });

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
@@ -311,7 +311,7 @@ async function main() {

messages.push({ role: 'user', content: userInput });

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
@@ -384,7 +384,7 @@ async function main() {

messages.push({ role: 'user', content: userInput });

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {

8 changes: 4 additions & 4 deletions content/docs/02-guides/01-rag-chatbot.mdx
@@ -441,7 +441,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4o'),
messages,
});
@@ -470,7 +470,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4o'),
system: `You are a helpful assistant. Check your knowledge base before answering any questions.
Only respond to questions using information from tool calls.
@@ -508,7 +508,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4o'),
system: `You are a helpful assistant. Check your knowledge base before answering any questions.
Only respond to questions using information from tool calls.
@@ -691,7 +691,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4o'),
messages,
system: `You are a helpful assistant. Check your knowledge base before answering any questions.

2 changes: 1 addition & 1 deletion content/docs/02-guides/02-multi-modal-chatbot.mdx
@@ -106,7 +106,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
});

4 changes: 2 additions & 2 deletions content/docs/02-guides/03-llama-3_1.mdx
@@ -83,7 +83,7 @@ const groq = createGroq({
apiKey: process.env.GROQ_API_KEY,
});

-const { textStream } = await streamText({
+const { textStream } = streamText({
model: groq('llama-3.1-70b-versatile'),
prompt: 'What is love?',
});
@@ -227,7 +227,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: groq('llama-3.1-70b-versatile'),
system: 'You are a helpful assistant.',
messages,

4 changes: 2 additions & 2 deletions content/docs/02-guides/05-computer-use.mdx
@@ -144,7 +144,7 @@ console.log(response.text);
For streaming responses, use `streamText` to receive updates in real-time:

```ts
-const result = await streamText({
+const result = streamText({
model: anthropic('claude-3-5-sonnet-20241022'),
prompt: 'Open the browser and navigate to vercel.com',
tools: { computer: computerTool },
@@ -160,7 +160,7 @@ for await (const chunk of result.textStream) {
To allow the model to perform multiple steps without user intervention, specify a `maxSteps` value. This will automatically send any tool results back to the model to trigger a subsequent generation:

```ts highlight="5"
-const stream = await streamText({
+const stream = streamText({
model: anthropic('claude-3-5-sonnet-20241022'),
prompt: 'Open the browser and navigate to vercel.com',
tools: { computer: computerTool },

8 changes: 4 additions & 4 deletions content/docs/03-ai-sdk-core/05-generating-text.mdx
@@ -57,7 +57,7 @@ AI SDK Core provides the [`streamText`](/docs/reference/ai-sdk-core/stream-text)
```ts
import { streamText } from 'ai';

-const result = await streamText({
+const result = streamText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
});
@@ -108,7 +108,7 @@ It receives the following chunk types:
```tsx highlight="6-11"
import { streamText } from 'ai';

-const result = await streamText({
+const result = streamText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
onChunk({ chunk }) {
@@ -128,7 +128,7 @@ It contains the text, usage information, finish reason, and more:
```tsx highlight="6-8"
import { streamText } from 'ai';

-const result = await streamText({
+const result = streamText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
onFinish({ text, finishReason, usage }) {
@@ -147,7 +147,7 @@ Here is an example of how to use the `fullStream` property:
import { streamText } from 'ai';
import { z } from 'zod';

-const result = await streamText({
+const result = streamText({
model: yourModel,
tools: {
cityAttractions: {

2 changes: 1 addition & 1 deletion content/docs/03-ai-sdk-core/45-middleware.mdx
@@ -34,7 +34,7 @@ const wrappedLanguageModel = wrapLanguageModel({
The wrapped language model can be used just like any other language model, e.g. in `streamText`:

```ts highlight="2"
-const result = await streamText({
+const result = streamText({
model: wrappedLanguageModel,
prompt: 'What cities are in the United States?',
});

4 changes: 2 additions & 2 deletions content/docs/03-ai-sdk-core/50-error-handling.mdx
@@ -32,7 +32,7 @@ You can handle these errors using the `try/catch` block.
import { generateText } from 'ai';

try {
-const { textStream } = await streamText({
+const { textStream } = streamText({
model: yourModel,
prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
@@ -56,7 +56,7 @@ happen outside of the streaming.
import { generateText } from 'ai';

try {
-const { fullStream } = await streamText({
+const { fullStream } = streamText({
model: yourModel,
prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

2 changes: 1 addition & 1 deletion content/docs/03-ai-sdk-core/55-testing.mdx
@@ -50,7 +50,7 @@ const result = await generateText({
import { streamText } from 'ai';
import { simulateReadableStream, MockLanguageModelV1 } from 'ai/test';

-const result = await streamText({
+const result = streamText({
model: new MockLanguageModelV1({
doStream: async () => ({
stream: simulateReadableStream({

6 changes: 3 additions & 3 deletions content/docs/04-ai-sdk-ui/02-chatbot.mdx
@@ -53,7 +53,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
system: 'You are a helpful assistant.',
messages,
@@ -376,7 +376,7 @@ import { streamText } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4o'),
messages,
});
@@ -412,7 +412,7 @@ import { streamText } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4o'),
messages,
});

6 changes: 3 additions & 3 deletions content/docs/04-ai-sdk-ui/03-chatbot-with-tool-calling.mdx
@@ -60,7 +60,7 @@ export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
@@ -201,7 +201,7 @@ You can stream tool calls while they are being generated by enabling the
export async function POST(req: Request) {
// ...

-const result = await streamText({
+const result = streamText({
experimental_toolCallStreaming: true,
// ...
});
@@ -252,7 +252,7 @@ import { z } from 'zod';
export async function POST(req: Request) {
const { messages } = await req.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {

4 changes: 2 additions & 2 deletions content/docs/04-ai-sdk-ui/04-generative-user-interfaces.mdx
@@ -67,7 +67,7 @@ import { streamText } from 'ai';
export async function POST(request: Request) {
const { messages } = await request.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4o'),
system: 'You are a friendly assistant!',
messages,
@@ -120,7 +120,7 @@ import { tools } from '@/ai/tools';
export async function POST(request: Request) {
const { messages } = await request.json();

-const result = await streamText({
+const result = streamText({
model: openai('gpt-4o'),
system: 'You are a friendly assistant!',
messages,