
feat: Add support for llamaIndex in evaluation #1619

Merged
merged 4 commits into explodinggradients:main on Nov 6, 2024

Conversation

@suekou (Contributor) commented on Nov 4, 2024

Added type checks for llamaIndex LLMs and embeddings in the evaluate function.
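For orientation, here is a minimal sketch of what such a type check can look like. The wrapper and base-class names (LlamaIndexLLMWrapper, LlamaIndexEmbeddingsWrapper, BaseLLM, BaseEmbedding) are assumptions drawn from the ragas and LlamaIndex public APIs, not a quote of this PR's diff:

```python
# Hedged sketch: wrap raw LlamaIndex objects so evaluate() can use them.
# Class names and import paths are assumptions, not a quote of this PR.
def wrap_if_llama_index(llm, embeddings):
    """Wrap LlamaIndex LLMs/embeddings in ragas-compatible adapters.

    Imported lazily so that ragas still works when the optional
    llama_index dependency is not installed.
    """
    try:
        from llama_index.core.base.llms.base import BaseLLM
        from llama_index.core.base.embeddings.base import BaseEmbedding
    except ImportError:
        return llm, embeddings  # llama_index not installed; nothing to wrap

    from ragas.llms import LlamaIndexLLMWrapper
    from ragas.embeddings import LlamaIndexEmbeddingsWrapper

    if isinstance(llm, BaseLLM):
        llm = LlamaIndexLLMWrapper(llm)
    if isinstance(embeddings, BaseEmbedding):
        embeddings = LlamaIndexEmbeddingsWrapper(embeddings)
    return llm, embeddings
```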

@dosubot (bot) added the size:S label (This PR changes 10-29 lines, ignoring generated files) on Nov 4, 2024
@shahules786 requested a review from jjmachan on Nov 4, 2024
@jjmachan (Member) left a comment

thanks a lot for sending a PR 🙂 - just one change though and we'll merge it in

pyproject.toml Outdated
@@ -8,6 +8,7 @@ dependencies = [
     "langchain-core",
     "langchain-community",
     "langchain_openai",
+    "llama_index",
@jjmachan (Member): llamaindex is actually not part of the core dependency. It's best if we keep it optional.

@jjmachan (Member): if you move it to the [all] group that would be helpful, or maybe even a new group for llamaindex?

@suekou (Contributor, Author): Thanks for the feedback! I've updated the PR to make llama_index optional by moving it to the [all] group.
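For reference, declaring an optional extra in pyproject.toml looks roughly like this; the group contents shown are a sketch, not the actual ragas file:

```toml
# Sketch of an optional-dependency group in pyproject.toml (PEP 621);
# the real [all] group in ragas contains more entries than shown here.
[project.optional-dependencies]
all = [
    "llama_index",
]
```

Users who want the LlamaIndex integration then opt in with pip install ragas[all], while the core install stays lean.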

@suekou requested a review from jjmachan on November 4, 2024
@jjmachan (Member) left a comment

Thanks a lot @suekou 🙂 ❤️
merging this in

@jjmachan merged commit 6ff35f7 into explodinggradients:main on Nov 6, 2024
15 checks passed
@jjmachan (Member) commented on Nov 6, 2024

btw would love to get your feedback on this doc too if you get the chance: https://docs.ragas.io/en/latest/howtos/integrations/_llamaindex/

@suekou (Contributor, Author) commented on Nov 6, 2024

@jjmachan I found this to be very clear and understandable!
I noticed the URL on the page was outdated, so I submitted a PR to update it!

@suekou (Contributor, Author) commented on Nov 6, 2024

Looking at it again, I think it might also be worth mentioning the differences between evaluate() imported from ragas (from ragas import evaluate) and evaluate() from ragas.integrations.llama_index. For instance, the top-level evaluate exposes more detailed configuration options, such as in_ci and token_usage_parser, which enable a more customized evaluation.
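To make the contrast concrete, here is a hedged sketch of the two entry points. Only in_ci, token_usage_parser, and query_engine are named in this thread; dataset, metrics, and parser are placeholders, and the exact signatures are assumptions:

```python
# Sketch only: in_ci, token_usage_parser and query_engine come from this
# thread; the other argument names are assumptions about the signatures.

# 1) Top-level evaluate: you supply an already-populated dataset and get
#    fine-grained knobs such as in_ci and token_usage_parser.
from ragas import evaluate

result = evaluate(
    dataset,                    # responses/contexts already filled in
    metrics=metrics,
    in_ci=True,                 # more reproducible settings for CI runs
    token_usage_parser=parser,  # track token usage during evaluation
)

# 2) Integration evaluate: ragas runs your LlamaIndex query_engine for you
#    to generate the responses and contexts before scoring them.
from ragas.integrations.llama_index import evaluate as li_evaluate

result = li_evaluate(
    query_engine=query_engine,  # your LlamaIndex query engine
    dataset=dataset,            # questions (and references) to run through
    metrics=metrics,
)
```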

@jjmachan (Member) commented on Nov 7, 2024

hey @suekou that's a good point, but right now evaluate in ragas.integrations.llama_index has the same features as the top-level evaluate, right?

the idea was to keep the two feature-compatible, since behind the scenes all we are doing is running the query_engine for you

ps: Thanks a lot for the other PR too 🙂 ❤️
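In other words, the integration's extra step is roughly the loop below. This is a simplified illustration of "running the query_engine for you"; the sample field names are assumptions about the ragas schema, and the real code also handles async execution, batching, and errors:

```python
# Simplified illustration; field names (user_input, response,
# retrieved_contexts) are assumptions, not a verified ragas schema.
for sample in dataset.samples:
    response = query_engine.query(sample.user_input)  # run LlamaIndex
    sample.response = str(response)                   # answer text
    sample.retrieved_contexts = [                     # retrieved chunks
        node.get_content() for node in response.source_nodes
    ]
```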
