feat: Add retry with exponential back-off to PromptNode's OpenAI models #3886
Conversation
I like the concept a lot, but I think the backoff parameters should be controllable by users. Right now 10 and 5 are hardcoded and quite invisible... They would be hard to change even with reflection! Can we find a way (any, really) to let the user control such values? I'd be happy with a hacky-ish solution too (env vars, module constants, ...)
In an ideal world, yes, but IMHO the cost-benefit of providing such fine-grained control might be overkill. IIRC, Bogdan and I worked on this feature; users have used it in other components (EmbeddingEncoder) for some time now, and we have yet to receive such a request. Let's hear the opinion of others and proceed from there. cc @bogdankostic @masci
For context: we had a similar discussion in #3398.
I remember now, thanks! Maybe env variables can be used? I'll check tomorrow.
Ok, I stand corrected. We can use env vars with defaults - and we should. See 962281a |
What would be the best names for these two variables? Let's use them not only for OpenAI remote models but for all remote model invocations. If we go down the path of naming env vars for backoff and max retries for every model, we'll end up with many of these env vars.
@ZanSara @bogdankostic @sjrl @masci I came up with these names. Please feel free to suggest others. I liked this approach because all env vars are imported in …
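A sketch of the env-var-with-defaults approach under discussion (the variable names below are illustrative only, since the final names were still being decided):

```python
import os

# Illustrative env var names only; the final names were still under discussion.
# The defaults (10 s backoff, 5 retries) apply when the variables are unset.
OPENAI_BACKOFF = float(os.environ.get("HAYSTACK_REMOTE_API_BACKOFF_SEC", 10))
OPENAI_MAX_RETRIES = int(os.environ.get("HAYSTACK_REMOTE_API_MAX_RETRIES", 5))
```

Declaring these as module constants lets every remote-model invocation import the same values, so one pair of env vars covers all providers rather than one pair per model.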
Great! Looks like a good solution to me 😊
Just as a follow-up: do we set these env vars to 0 in the CI? I think that might help speed up the tests a bit.
Cool, I would leave them either as is or lower.
Yeah, in fact let's set …
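If the team goes this route, zeroing the variables in the CI job could look like the following (hypothetical names again, to be replaced by whatever is finally agreed on):

```shell
# Hypothetical variable names; substitute the names chosen in the PR.
# Setting both to 0 disables the sleep between retries, speeding up tests.
export HAYSTACK_REMOTE_API_BACKOFF_SEC=0
export HAYSTACK_REMOTE_API_MAX_RETRIES=0
echo "backoff=$HAYSTACK_REMOTE_API_BACKOFF_SEC retries=$HAYSTACK_REMOTE_API_MAX_RETRIES"
```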
Related Issues
Proposed Changes:
Added a well-known (to us at least) and previously used decorator
@retry_with_exponential_backoff(backoff_in_seconds=10, max_retries=5)
that handles the OpenAIRateLimitError with retry and exponential backoff.
How did you test it?
Manual tests by forcing OpenAIRateLimitError
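A minimal sketch of such a decorator, assuming the parameter names from the snippet above (the OpenAIRateLimitError class here is only a stand-in for the real exception raised on a 429 response):

```python
import functools
import random
import time


class OpenAIRateLimitError(Exception):
    """Stand-in for the real rate-limit error raised by the OpenAI client."""


def retry_with_exponential_backoff(backoff_in_seconds: float = 10, max_retries: int = 5):
    """Retry the wrapped call on OpenAIRateLimitError with exponential backoff."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except OpenAIRateLimitError:
                    if attempt == max_retries:
                        raise  # Out of retries: propagate the rate-limit error
                    # Sleep backoff * 2**attempt seconds, plus jitter to
                    # avoid many clients retrying in lockstep.
                    time.sleep(backoff_in_seconds * (2 ** attempt) + random.random())
        return wrapper
    return decorator
```

With `backoff_in_seconds=10, max_retries=5` the waits grow roughly as 10, 20, 40, 80, 160 seconds before giving up, which is why making these values configurable (as discussed above) matters for CI.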
Notes for the reviewer
We have used this decorator in other OpenAI APIs; I checked it manually and didn't add new tests. We'll see how this works in our CI ;-)
Checklist
The PR title starts with one of the conventional commit prefixes: fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test:.