Stream Chat LLM Token By Token is not working #78
Comments
This seems to be a bug introduced in the [...]. Although, given that example, the [...]
@elliottshort So, what LangChain version do you recommend with LangGraph? Because I tried to use LangGraph [...]
Hey @ahmedmoorsy, which python version are you using? I had a similar issue when running on python 3.10, but the streaming worked once I upgraded to 3.12 (3.11 should also work, I think).
@SavvasMohito That is interesting. I've been having the same issues with streaming using python 3.10.11. The only way I've been able to get around it is by modifying the wrapper functions. See the other LLM token thread for streaming with astream_log - I just converted the call_model wrapper function to an LCEL chain and it worked fine. The same approach works with astream_events to get the correct output in the example notebook. I'm sure there's a more elegant way to do this, specifically to pass through the tool name, but this does work at least with python 3.10.11:
Then instead of call_model as a node just use model:
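The code for this modification was lost in the page capture. The following is a minimal sketch of the idea under stated assumptions: `FakeModel` is a stand-in (the real notebook uses a `ChatOpenAI` bound to tools), and the key point is that a hand-written wrapper node drops the `config` that carries the streaming callbacks, while using the model (or an LCEL chain ending in it) directly as the node lets langgraph hand the config through.

```python
import asyncio

# Stand-in so the pattern runs without langchain installed; in the real
# notebook `model` is a ChatOpenAI and `state` is the graph state dict.
class FakeModel:
    async def ainvoke(self, messages, config=None):
        # astream_events can only surface tokens when `config` (which carries
        # the callbacks) actually reaches the model call.
        return {"reply": "ok", "saw_config": config is not None}

model = FakeModel()

# Wrapper node in the style of the notebook's call_model: `config` is never
# forwarded, so on Python <= 3.10 token streaming silently produces nothing.
async def call_model(state):
    return {"messages": [await model.ainvoke(state["messages"])]}

# Using the model itself as the node (or a node that accepts and forwards
# config) means the callbacks reach the model and tokens stream.
async def model_as_node(state, config):
    return {"messages": [await model.ainvoke(state["messages"], config=config)]}

result = asyncio.run(model_as_node({"messages": ["hi"]}, {"callbacks": []}))
print(result["messages"][0]["saw_config"])  # → True
```

The same forwarding happens implicitly inside LCEL chains, which is presumably why converting the wrapper to a chain fixed streaming for the commenter.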
Nice one @tgram-3D. Does streaming of follow-up questions work for you? I can get responses streamed for my initial prompt, but when I ask a second question, it never returns anything. It's not that I'm failing to capture it; it seems like langgraph never exits the model invocation loop.
@SavvasMohito I've just been playing around with the example streaming-tokens notebook with astream_events, and follow up questions seem to work fine using the node modifications described above. The notebook isn't really set up for memory though since all it does is add a single HumanMessage for each streaming event:
The way I'm currently using it for follow up questions is pretty basic:
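The commenter's code here was lost in the capture. A plausible sketch of a "pretty basic" approach, with hypothetical names (`ChatSession`, `ask` are illustrations, not the commenter's actual class): keep the full message history and resend it each turn so follow-up questions have context.

```python
# Hypothetical helper: accumulates message history across turns so a
# second question still sees the first exchange.
class ChatSession:
    def __init__(self):
        self.history = []

    def ask(self, question):
        self.history.append(("human", question))
        # In real code this would run the notebook's astream_events loop over
        # app with {"messages": self.history}; echoed here to stay self-contained.
        answer = f"answer to: {question}"
        self.history.append(("ai", answer))
        return answer

session = ChatSession()
session.ask("What is LangGraph?")
session.ask("Does it support streaming?")  # second turn carries the first turn
print(len(session.history))  # → 4
```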
I'm sure there's a better way to do this sort of thing. Check out the new Persistence notebook. I currently cannot get astream_events to work with the persistence config at all though:
I tried this and got zero output, same deal without the configurable:
I'm sure there's a way to combine the two features, but the simple class method should work fine for frontend integration with websockets, etc.
Thank you! Upgrading from 3.10.13 to 3.12.2 fixed this demo for me |
I was struggling with token streaming in this tutorial and found another workaround to enable it. I just added an extra parameter
That way I was able to fetch the token stream from the LLM.
Not clear on how, but dmitryrPlanner5D's comment solved this for me too. I'm using python 3.10. |
What are you guys passing in the config? @simonrmonk @dmitryrPlanner5D |
@OSS-GR |
Streaming is also not working if I update the agent_supervisor notebook as specified in the documentation (set
For example, here's the log from a slightly tweaked version of the notebook. In this example, the supervisor should output
I found the issue and the solution.
re: #78 (comment) The reason explicit config passing is needed on Python versions < 3.11 is that asyncio didn't add context support until 3.11.
Closing issue. You must either pass config explicitly when using Python < 3.11, or upgrade to Python 3.11+, starting from which we do that automatically on behalf of the user. The config object contains the callbacks that astream_events and astream_log need in order to get the streaming tokens from the LLM.
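The resolution above can be illustrated with a stdlib-only sketch. `callbacks_var` and the node functions are stand-ins, not LangChain APIs: the context variable plays the role of the callbacks LangChain carries on the `RunnableConfig`, and the two nodes contrast relying on ambient context (which, per the maintainers, cannot be seeded automatically on Python < 3.11) with threading the config through by hand.

```python
import asyncio
import contextvars

# Stand-in for the callbacks LangChain carries on the RunnableConfig.
callbacks_var = contextvars.ContextVar("callbacks", default=None)

async def inner_llm_call(config=None):
    # Token streaming works only if the callbacks are visible here, either
    # through the ambient context or because the caller passed them explicitly.
    return config if config is not None else callbacks_var.get()

async def node_without_config():
    # Relies on ambient context alone; when nothing seeds that context,
    # the inner call sees no callbacks and streaming silently yields nothing.
    return await inner_llm_call()

async def node_with_config(config):
    # The portable fix for Python < 3.11: thread the config through by hand.
    return await inner_llm_call(config=config)

implicit = asyncio.run(node_without_config())   # no context was seeded here
explicit = asyncio.run(node_with_config({"callbacks": ["token-handler"]}))
print(implicit, explicit)  # → None {'callbacks': ['token-handler']}
```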
Checked other resources
Example Code
I am using the code in this notebook:
https://github.com/langchain-ai/langgraph/blob/main/examples/streaming-tokens.ipynb
streaming code:
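The snippet itself was lost in the capture. The loop in the referenced notebook has roughly the following shape; `fake_astream_events` below is a stub standing in for `app.astream_events(...)` so the sketch runs without langgraph, but the token-filtering logic is the same.

```python
import asyncio

# Stub standing in for app.astream_events(...); real events come from
# langgraph, but the consuming loop filters them the same way.
async def fake_astream_events():
    for chunk in ["Hel", "lo", "!"]:
        yield {"event": "on_chat_model_stream", "data": {"chunk": chunk}}
    yield {"event": "on_chain_end", "data": {}}

async def collect_tokens():
    tokens = []
    async for event in fake_astream_events():
        # Only chat-model stream events carry LLM tokens; everything else
        # (chain starts/ends, tool calls) is filtered out.
        if event["event"] == "on_chat_model_stream":
            tokens.append(event["data"]["chunk"])
    return tokens

tokens = asyncio.run(collect_tokens())
print("".join(tokens))  # → Hello!
```

If the callbacks never reach the model (the config issue discussed in the comments), this loop simply sees no `on_chat_model_stream` events, which matches the "no output" symptom reported here.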
Error Message and Stack Trace (if applicable)
The streaming is not working; I am not receiving any output from this part:
Description
The streaming is not working; I am not receiving any output.
System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-openai==0.0.5
langgraph==0.0.21