
Could you please provide more details on training DeBERTa as a reward model? #2

Open
4daJKong opened this issue Dec 11, 2023 · 0 comments


You mentioned:

This is a reward model based on the DeBERTa architecture, trained on four different types of human feedback data, endowing it with the abilities of QA model evaluation, reward scoring, and detecting potentially toxic responses via ranking.

However, I'm curious about the dataset and methodology used to train this reward model. Is it optimized exclusively for English QA datasets? I noticed that when I applied it to a Chinese QA dataset, it consistently produced poor scores.
I'd appreciate your insight on this.
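
For reference, here is a minimal sketch of how I am querying the reward model; I am assuming it can be loaded as a standard Hugging Face sequence-classification checkpoint that emits a single reward logit, and the checkpoint id below is a placeholder rather than this repository's actual model name:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint id -- substitute the actual reward-model checkpoint.
model_name = "your-org/deberta-reward-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def reward_score(question: str, answer: str) -> float:
    # Encode the (question, answer) pair as a cross-encoder input and
    # return the scalar reward logit.
    inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits[0].item()

# English pair scores look reasonable, but Chinese pairs like the second
# one consistently come back with low scores.
print(reward_score("What is the capital of France?", "The capital of France is Paris."))
print(reward_score("法国的首都是哪里？", "法国的首都是巴黎。"))
```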
