feat: Add support for llamaIndex in evaluation #1619
Conversation
thanks a lot for sending a PR 🙂 - just one change though and we'll merge it in
pyproject.toml (Outdated)
@@ -8,6 +8,7 @@ dependencies = [
     "langchain-core",
     "langchain-community",
     "langchain_openai",
+    "llama_index",
llamaindex is actually not part of the core dependencies. It's best if we keep it optional
if you move it to the [all] group, that would be helpful, or maybe even a new group for llamaindex?
Thanks for the feedback! I've updated the PR to make llama_index optional by moving it to the [all] group.
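For reference, here is a minimal sketch of what such an optional-dependency group can look like in pyproject.toml. The group contents shown are illustrative only; the project's actual [all] extra contains more packages.

```toml
# Hypothetical excerpt; ragas' real [all] group lists additional packages.
[project.optional-dependencies]
all = [
    "llama_index",
]
```

With a group like this, users opt in via pip install "ragas[all]" instead of llama_index being pulled into every install.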
Thanks a lot @suekou 🙂 ❤️
merging this in
btw would love to get your feedback on these docs too if you get the chance: https://docs.ragas.io/en/latest/howtos/integrations/_llamaindex/
Looking at it again, I think it might also be worth mentioning the differences between evaluate() from the llamaindex integration and the core ragas evaluate().
hey @suekou that's a good point, but right now the idea was to keep both of the features compatible, since behind the scenes all we are doing is running the query_engine for you. PS: Thanks a lot for the other PR too 🙂 ❤️
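To make the "running the query_engine for you" point concrete, here is a rough sketch of what such an evaluate() wrapper can do internally; the function name and row fields are assumptions for illustration, not the actual ragas implementation:

```python
# Illustrative sketch only: drives a LlamaIndex query engine over a list of
# questions and collects answers plus retrieved contexts for metric scoring.
def run_query_engine(query_engine, questions):
    rows = []
    for question in questions:
        response = query_engine.query(question)  # standard LlamaIndex call
        rows.append({
            "question": question,
            "answer": str(response),
            # the retrieved chunks that grounded the answer
            "contexts": [node.get_content() for node in response.source_nodes],
        })
    return rows
```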
Added type checks for llamaIndex LLMs and embeddings in the evaluate function.
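A minimal sketch of what such type checks can look like; the wrapper class names and import paths below are assumptions and may not match the actual ragas code:

```python
from llama_index.core.llms import LLM as LlamaIndexLLM
from llama_index.core.embeddings import BaseEmbedding

# Assumed ragas wrappers; check ragas.llms / ragas.embeddings for real names.
from ragas.llms import LlamaIndexLLMWrapper
from ragas.embeddings import LlamaIndexEmbeddingsWrapper

def coerce_llamaindex(llm, embeddings):
    """Wrap raw LlamaIndex objects so evaluate() treats all backends uniformly."""
    if isinstance(llm, LlamaIndexLLM):
        llm = LlamaIndexLLMWrapper(llm)
    if isinstance(embeddings, BaseEmbedding):
        embeddings = LlamaIndexEmbeddingsWrapper(embeddings)
    return llm, embeddings
```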