
streamObject with @ai-sdk/anthropic only provides a full object once finished via .toTextStreamResponse() #2105

Closed
Evgastap opened this issue Jun 26, 2024 · 6 comments

@Evgastap

Description

Following this tutorial, using @ai-sdk/openai as the provider works as expected: the object response is streamed to the client.

However, swapping it for @ai-sdk/anthropic seems to only return the object to the client once generation has completed.

Code example

const result = await streamObject({
  model: anthropic("claude-3-5-sonnet-20240620"),
  schema: notificationSchema,
  ...
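For context, the behaviour the client expects can be sketched without the SDK at all: partialObjectStream should yield progressively deeper partial objects as deltas arrive, not a single final object. The generator below is a stand-in for the SDK's stream, and the Notification shape is an assumption (the real notificationSchema is not shown in the issue).

```typescript
// Stand-in for the SDK's partialObjectStream (NOT the real API):
// each yield simulates one streamed delta from the provider.
type Notification = { name?: string; message?: string };

async function* mockPartialObjectStream(): AsyncGenerator<Notification> {
  yield {};
  yield { name: 'John' };
  yield { name: 'John', message: 'Meeting at 3pm' };
}

// Collect every partial object the stream produces, in order.
async function collectPartials(): Promise<Notification[]> {
  const partials: Notification[] = [];
  for await (const partial of mockPartialObjectStream()) {
    partials.push(partial);
  }
  return partials;
}
```

With the OpenAI provider the client sees each intermediate partial; the reported Anthropic behaviour is equivalent to only ever observing the last element.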

Additional context

No response

@lgrammel lgrammel self-assigned this Jun 26, 2024
@lgrammel lgrammel added enhancement New feature or request ai/provider labels Jun 26, 2024
@lgrammel
Collaborator

lgrammel commented Jun 26, 2024

Currently not supported.

Anthropic now supports https://docs.anthropic.com/en/api/messages-streaming#input-json-delta which could be used.
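Per the linked Anthropic docs, each input_json_delta event carries a partial_json string fragment of the tool call's input; concatenating the fragments in order yields the full JSON document. A minimal sketch of that accumulation (the sample delta contents are made up):

```typescript
// Event shape follows Anthropic's input_json_delta docs;
// the example fragments below are invented for illustration.
type InputJsonDelta = { type: 'input_json_delta'; partial_json: string };

// Concatenate the streamed JSON fragments in arrival order,
// then parse the completed document.
function accumulateToolInput(deltas: InputJsonDelta[]): unknown {
  const json = deltas.map(d => d.partial_json).join('');
  return JSON.parse(json);
}

const deltas: InputJsonDelta[] = [
  { type: 'input_json_delta', partial_json: '{"name": "Sa' },
  { type: 'input_json_delta', partial_json: 'ge", "class": "wiza' },
  { type: 'input_json_delta', partial_json: 'rd"}' },
];
// accumulateToolInput(deltas) → { name: 'Sage', class: 'wizard' }
```

In practice a streaming consumer would attempt a lenient parse after each fragment to surface partial objects early, rather than waiting for the final delta.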

@Evgastap
Author

Thanks for the fast reply! For extra context, adding mode: "json" triggers an error with the Anthropic provider, but leaving that line out still returns an object in the end. I guess that's expected behaviour.

@lgrammel
Collaborator

lgrammel commented Jun 26, 2024

Right now it is. I'll look into supporting input-json-delta (it was not available when the provider was first implemented).

@lgrammel lgrammel reopened this Jun 26, 2024
@lgrammel
Collaborator

I just checked and it is supported in the AI SDK: https://github.com/vercel/ai/blob/main/packages/anthropic/src/anthropic-messages-language-model.ts#L323

However, Anthropic waits and then streams all deltas really quickly, giving the impression that it only provides a full object.
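One way to see this buffering for yourself is to timestamp each chunk as it arrives. The sketch below uses a stand-in stream (slowThenFast is invented, not the provider) that pauses, then flushes everything back-to-back, mimicking the behaviour described above:

```typescript
// Stand-in for a provider stream that buffers, then flushes all
// deltas at once (invented for illustration).
async function* slowThenFast(): AsyncGenerator<string> {
  await new Promise(r => setTimeout(r, 200)); // long initial wait
  yield '{"charac';                           // then deltas arrive
  yield 'ters":[]}';                          // back-to-back
}

// Record the elapsed milliseconds between consecutive chunks
// of any async iterable.
async function interChunkGaps(stream: AsyncIterable<string>): Promise<number[]> {
  const gaps: number[] = [];
  let last = Date.now();
  for await (const _chunk of stream) {
    const now = Date.now();
    gaps.push(now - last);
    last = now;
  }
  return gaps;
}
```

A long first gap followed by near-zero gaps matches "waits, then streams all deltas really quickly"; a genuinely incremental stream would show gaps spread across the whole generation.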

@lgrammel lgrammel removed the enhancement New feature or request label Jun 26, 2024
@lgrammel
Collaborator

Example with delay that shows the tool call is streamed:

import { anthropic } from '@ai-sdk/anthropic';
import { streamObject } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';

dotenv.config();

async function main() {
  const result = await streamObject({
    model: anthropic('claude-3-5-sonnet-20240620'),
    maxTokens: 2000,
    schema: z.object({
      characters: z.array(
        z.object({
          name: z.string(),
          class: z
            .string()
            .describe('Character class, e.g. warrior, mage, or thief.'),
          description: z.string(),
        }),
      ),
    }),
    prompt:
      'Generate 3 character descriptions for a fantasy role playing game.',
  });

  for await (const partialObject of result.partialObjectStream) {
    // delay 50 ms to simulate processing time:
    await new Promise(resolve => setTimeout(resolve, 50));

    console.clear();
    console.log(partialObject);
  }
}

main().catch(console.error);

@holdenmatt

I found this issue while trying to diagnose long delays with Claude streaming + tool use.

I did some testing and confirmed this is an issue with the Claude API, not the ai SDK.

Some test code here, if helpful to anyone else who runs into this: anthropics/anthropic-sdk-typescript#529
