"Error: [object Object]" during message streaming when error is via an SSE (cause/detail not accessible) #346
Comments
Thanks for the report; we'll take a look!
Just a heads up: we have also been able to replicate this issue. This is running within a Lambda; the error occurs after a few hundred tokens, and a particular example prompt seems to replicate it for us reliably. The error we see is:

```
APIConnectionError: Connection error.
    at Function.generate (file:///var/task/node_modules/@anthropic-ai/sdk/error.mjs:32:20)
    at Stream.iterator (file:///var/task/node_modules/@anthropic-ai/sdk/streaming.mjs:52:40)
    ... 2 lines matching cause stack trace ...
    at async MessageStream._createMessage (file:///var/task/node_modules/@anthropic-ai/sdk/lib/MessageStream.mjs:113:26) {
  status: undefined,
  headers: undefined,
  error: undefined,
  cause: Error: [object Object]
      at castToError (file:///var/task/node_modules/@anthropic-ai/sdk/core.mjs:682:12)
      at Function.generate (file:///var/task/node_modules/@anthropic-ai/sdk/error.mjs:32:52)
      at Stream.iterator (file:///var/task/node_modules/@anthropic-ai/sdk/streaming.mjs:52:40)
      at runMicrotasks (<anonymous>)
      at processTicksAndRejections (node:internal/process/task_queues:96:5)
      at async MessageStream._createMessage (file:///var/task/node_modules/@anthropic-ai/sdk/lib/MessageStream.mjs:113:26)
}
```
Running into the exact same issue here when running on Vercel using the Vercel AI SDK.
FWIW, my guess is that this is due to Vercel timing out your handler, but I agree that the hard-to-read error message makes this worse. @RobertCraigie care to ticket?
Thanks! I'm on a Pro plan with Vercel with 5-minute timeouts, so I don't think that's actually the case for me.
@rattrayalex fwiw, as I mentioned above, we have seen this error in plain old AWS Lambda, and have observed that it is not related to Lambda timeouts. (Just for my own edification, what's the relationship here with @stainless-api?)
Gotcha, that's helpful. We'll try to look into this, but a repro script would be very helpful. Can anyone share one?
I work at Stainless, which Anthropic uses to build their SDKs.
I am seeing this too. I'm running a Next.js app locally; just randomly chatting with my app, it throws this error maybe every 5-10 requests. The app had been working fine with Together AI's API (via the OpenAI SDK) using Llama 3 and 3.1 over the last few months. Since swapping over to Anthropic, I'm now seeing this intermittent issue. This is the output when the error is thrown:
You could do something like this to serialize the object as JSON for use as the error message. Not an ideal fix, but at least we'd be able to see what the error is.
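A minimal sketch of that idea (not the SDK's actual code; the payload shape here is illustrative): fall back to `JSON.stringify` when the thrown value isn't already an `Error`, so the message carries the payload instead of `"[object Object]"`.

```typescript
// Hypothetical replacement for castToError: serialize non-Error values
// as JSON so the resulting message is inspectable.
function castToError(err: unknown): Error {
  if (err instanceof Error) return err;
  if (typeof err === 'string') return new Error(err);
  try {
    return new Error(JSON.stringify(err));
  } catch {
    // JSON.stringify can throw on circular structures; fall back to String().
    return new Error(String(err));
  }
}

const e = castToError({
  type: 'error',
  error: { type: 'overloaded_error', message: 'Overloaded' },
});
console.log(e.message);
// → {"type":"error","error":{"type":"overloaded_error","message":"Overloaded"}}
```

A caller could then `JSON.parse(e.message)` to recover the error type, at the cost of a less human-readable message.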
We're also seeing this issue (using …).
@greg84 @jbergs-dsit (or anyone else on this thread) could you please provide a codesandbox or similar which reproduces the error? |
It should replicate for you using this repo (see comment above): https://github.com/beginner-corp/claude-begin-demo Note: you don't need to deploy to Begin to replicate; just run the local sandbox with …
Thank you @ryanblock, we'll take a look soon! |
I have not been able to consistently reproduce this. It happens when the API returns an error to a streaming response; we have seen it during times of instability, when the API was returning 500 or overloaded errors. Please read the original comment from paulcalcraft; it describes exactly what is happening. We just need to extract some useful detail from `errJSON` before the error is thrown.
EDIT: we're working on a fix for this internally. |
@rattrayalex that appears to be a private repo? |
Sorry, it looks like it was closed prematurely, before this commit was pushed.
This fix was released in …
When hitting an error during the async iterator of an `anthropic.messages.create()` call, the exception raised and its associated error object don't carry any detail; `e.cause.message` is just set to `"[object Object]"`. My example SSE that's occurring during streaming is:
The error SSE is then thrown using `APIError.generate` here: `anthropic-sdk-typescript/src/streaming.ts`, line 95 (at `ad92b0d`).
The `errJSON` is correctly being passed to `generate`, but because `status` isn't set (it's an SSE, not an HTTP response), we use `castToError()` to raise the `APIConnectionError` with no other info: `anthropic-sdk-typescript/src/error.ts`, line 52 (at `ad92b0d`).
`castToError` just returns `new Error(errJSON)`: `anthropic-sdk-typescript/src/core.ts`, line 977 (at `ad92b0d`).
But `errJSON` doesn't have a useful `toString`, so our `cause` Error object ends up with the message `"[object Object]"` and no other properties. This means you can't handle or inspect the error cause correctly when catching the error during the async iterator.
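The mechanism can be demonstrated in isolation: `Error`'s constructor coerces a non-string argument with `String()`, and the default `Object.prototype.toString` of a plain object is `"[object Object]"` (the payload shape below is illustrative).

```typescript
// Passing a plain object where Error expects a string message loses
// all of the object's fields.
const errJSON = { type: 'error', error: { type: 'overloaded_error' } };
const cause = new Error(errJSON as unknown as string);

console.log(String(errJSON)); // → [object Object]
console.log(cause.message);   // → [object Object]
```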
An example error:
And where I'm catching it:
Would it be possible to format the error correctly, so that it's possible to identify at least the error type by inspecting `.cause` on the `APIConnectionError`?
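A self-contained sketch of the desired ergonomics (the class shape and the `detail` field here are hypothetical, not the SDK's actual API):

```typescript
// Hypothetical: an APIConnectionError that carries the parsed SSE
// payload so callers can branch on the error type.
class APIConnectionError extends Error {
  detail?: { type: string; message?: string };
}

function simulateStreamError(): never {
  const err = new APIConnectionError('Connection error.');
  // With the requested fix, the parsed SSE payload would ride along:
  err.detail = { type: 'overloaded_error', message: 'Overloaded' };
  throw err;
}

function handle(): string {
  try {
    simulateStreamError();
  } catch (err) {
    if (err instanceof APIConnectionError && err.detail) {
      // The error type is now recoverable by the caller.
      return err.detail.type;
    }
    throw err;
  }
}

console.log(handle()); // → overloaded_error
```

The same effect could be achieved with the standard `cause` option on `Error`, as long as the cause is built from the parsed payload rather than from `String(errJSON)`.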
Thanks for any help. Also happy to submit a PR if there's agreement on the best way to surface the error detail in the APIConnectionError object.