This reward model is based on the DeBERTa architecture and is trained on four different types of human-feedback data, which gives it the ability to evaluate QA models, assign reward scores, and detect potentially toxic responses via ranking.
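For context, here is a minimal sketch of how such ranking-based reward scoring is typically invoked through the transformers sequence-classification interface; the checkpoint name, question, and candidate answers below are placeholders rather than the actual model or data being discussed:

```python
# Minimal sketch: scoring (question, answer) pairs with a DeBERTa-based reward
# model via the Hugging Face transformers sequence-classification interface.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "your-org/deberta-reward-model"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

question = "Explain why the sky is blue."
candidates = [
    "Shorter wavelengths of sunlight are scattered more by air molecules.",
    "Because it just is.",
]

with torch.no_grad():
    scores = []
    for answer in candidates:
        # The reward model scores each (question, answer) pair; higher is better.
        inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
        scores.append(model(**inputs).logits[0].item())

# Rank candidates by reward score (descending) to pick the preferred response.
for answer, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:+.3f}  {answer}")
```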
However, I'm curious about the dataset and methodology employed in training this reward model. Is it exclusively optimized for English QA datasets? I observed that when applied to a Chinese QA dataset, it consistently yielded poor scores.
Appreciate your insight on this matter.