
Include time_out in the answer_generator run method #3947

Closed
rolandtannous opened this issue Jan 25, 2023 · 3 comments · Fixed by #3949
Comments

@rolandtannous

Currently, the OpenAIAnswerGenerator class doesn't accept a timeout parameter when it is used together with a retriever in a pipeline.run() call.

The timeout parameter appears to be adjustable only in a standalone generator.predict() call on the OpenAIAnswerGenerator. It would be great if we could pass the timeout parameter when initializing the OpenAIAnswerGenerator, or as a parameter inside pipeline.run().
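To make the request concrete, here is a minimal sketch of the behaviour being asked for. The class and parameter names are illustrative only, not the actual Haystack API: the point is that a timeout set once at init is reused for every call made inside a pipeline, while a per-call override stays possible.

```python
# Hypothetical sketch of the requested behaviour -- illustrative names,
# NOT the real OpenAIAnswerGenerator API.
class AnswerGeneratorSketch:
    def __init__(self, timeout=30.0):
        # proposed: store a per-request timeout once, at construction time,
        # so pipeline.run() picks it up without extra plumbing
        self.timeout = timeout

    def predict(self, query, timeout=None):
        # a per-call override is still possible; otherwise fall back
        # to the value chosen at init
        effective_timeout = timeout if timeout is not None else self.timeout
        return {"query": query, "timeout_used": effective_timeout}
```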

Additional context
Appears related to this other feature request #3797

This came up after I started receiving timeouts from the OpenAI server. It was working fine, but then started timing out. It has also been mentioned in this Discord thread.

@TuanaCelik TuanaCelik added type:feature New feature or request topic:pipeline labels Jan 25, 2023
@danielbichuetti
Contributor

As mentioned in Discord, I don't think it makes sense to allow changing the timeout on each Pipeline run. It's not usual, at least.

As for #3797, I can imagine some scenarios where it would be useful to set max_tokens differently across requests.

@bogdankostic
Contributor

In #3886 we added the environment variables HAYSTACK_REMOTE_API_BACKOFF_SEC and HAYSTACK_REMOTE_API_MAX_RETRIES, which are currently only used in PromptNode. We should definitely also use them in the @retry_with_exponential_backoff decorator on the embed methods of _OpenAIEmbeddingEncoder and _CohereEmbeddingEncoder, and on the predict method of OpenAIAnswerGenerator.
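A rough sketch of what reading those environment variables in such a decorator could look like. This is not the actual Haystack implementation; the default values and the plain exponential-backoff schedule are assumptions for illustration:

```python
import os
import time
from functools import wraps

# Hedged sketch, NOT the real Haystack decorator: retry a remote call,
# taking the backoff interval and retry limit from the environment
# variables added in #3886, with assumed fallback defaults.
def retry_with_exponential_backoff(func):
    backoff = float(os.environ.get("HAYSTACK_REMOTE_API_BACKOFF_SEC", 10))
    max_retries = int(os.environ.get("HAYSTACK_REMOTE_API_MAX_RETRIES", 5))

    @wraps(func)
    def wrapper(*args, **kwargs):
        for attempt in range(max_retries + 1):
            try:
                return func(*args, **kwargs)
            except Exception:
                if attempt == max_retries:
                    raise  # give up after the configured number of retries
                # exponential backoff: backoff * 2**attempt seconds
                time.sleep(backoff * (2 ** attempt))
    return wrapper
```

The same decorator could then be applied unchanged to the embed and predict methods mentioned above, so all remote-API nodes share one retry configuration.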

Would one of you be interested in creating a PR for this? :)

@danielbichuetti
Contributor

danielbichuetti commented Jan 25, 2023

@bogdankostic I can proceed. You want the same pattern adopted in PromptNode, right? I'll commit in the next few hours.
