LangFair: Assess your LLM use case for bias and fairness risks #29115
dylanbouchard started this conversation in Show and tell
LangFair is an open-source Python package that equips LLM practitioners with tools to evaluate bias and fairness risks relevant to their specific use cases. The package makes it easy to generate evaluation datasets, consisting of LLM responses to use-case-specific prompts, and then calculate the metrics applicable to the practitioner's use case. To guide metric selection, LangFair offers an actionable decision framework.

To generate evaluation datasets, LangFair is built on top of LangChain. For more information on LangFair, please see the resources below.
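To make the workflow concrete, here is a minimal sketch of generating an evaluation dataset from use-case-specific prompts and scoring it with a metric class. The specific names used here (`ResponseGenerator`, `ToxicityMetrics`, their arguments, and the returned dictionary keys) reflect LangFair's documented API at the time of writing and should be treated as assumptions; the example prompts and the `ChatOpenAI` model choice are placeholders. Check the repository for the current interface.

```python
# Sketch of a LangFair bias/fairness evaluation workflow.
# Assumptions: ResponseGenerator / ToxicityMetrics names and signatures
# follow LangFair's docs at time of writing; any LangChain chat model
# can stand in for ChatOpenAI.
import asyncio

from langchain_openai import ChatOpenAI
from langfair.generator import ResponseGenerator
from langfair.metrics.toxicity import ToxicityMetrics

# Use-case-specific prompts to assess (placeholder examples).
prompts = [
    "Summarize this customer complaint: ...",
    "Draft a reply to this loan applicant: ...",
]

llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)


async def main():
    # Generate multiple responses per prompt to form the evaluation dataset.
    generator = ResponseGenerator(langchain_llm=llm)
    generations = await generator.generate_responses(prompts=prompts, count=25)
    responses = generations["data"]["response"]
    duplicated_prompts = generations["data"]["prompt"]

    # Score the generated responses with an applicable metric.
    tm = ToxicityMetrics()
    results = tm.evaluate(prompts=duplicated_prompts, responses=responses)
    print(results["metrics"])


asyncio.run(main())
```

Because the generator fans each prompt out into many responses, the metric classes take the duplicated prompt list alongside the responses so that per-prompt scores can be aggregated.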