Fix the finetuning script's loss and metric config #176

Merged
merged 2 commits into keras-team:master from fix-finetune on May 11, 2022

Conversation

chenmoneygithub
Contributor

  1. The model output should be probabilities instead of logits.
  2. The metric should be SparseCategoricalAccuracy instead of the "accuracy" string (see the sketch below).
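
A minimal sketch of the change as proposed in this description, assuming a TF Keras setup; the variable names and the 768-dim pooled input are illustrative stand-ins for the fine-tuning script's actual encoder output:

    from tensorflow import keras

    # Hypothetical stand-ins; the actual script wires these from the BERT encoder.
    num_classes = 3
    pooled_output = keras.Input(shape=(768,), name="pooled_output")

    # Proposed change: emit probabilities via softmax rather than raw logits.
    probs = keras.layers.Dense(
        num_classes, activation="softmax", name="probability"
    )(pooled_output)
    model = keras.Model(pooled_output, probs)

    model.compile(
        optimizer="adam",
        # Default from_logits=False matches a probability output.
        loss=keras.losses.SparseCategoricalCrossentropy(),
        # Use the metric class rather than the "accuracy" string.
        metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )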

Member

@mattdangerw mattdangerw left a comment


Looks good! One comment

        num_classes,
-       name="logits",
+       name="probability",
+       activation="softmax",
@mattdangerw (Member)


Given that the pretraining script outputs logits, I think we might want to stay consistent.
https://github.com/keras-team/keras-nlp/blob/master/examples/bert/run_pretraining.py#L119

So stick with logits here, and pass from_logits=True to the loss below.
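
A minimal sketch of the configuration this comment suggests, using the same illustrative stand-ins as above (the actual script attaches the head to the BERT encoder's pooled output):

    from tensorflow import keras

    num_classes = 3  # hypothetical class count
    pooled_output = keras.Input(shape=(768,), name="pooled_output")  # stand-in for the encoder output

    # Keep a raw-logit output, consistent with run_pretraining.py.
    logits = keras.layers.Dense(num_classes, name="logits")(pooled_output)
    model = keras.Model(pooled_output, logits)

    model.compile(
        optimizer="adam",
        # Tell the loss that the model emits logits, not probabilities.
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )

Keeping the logits and letting the loss apply the softmax internally is also the numerically more stable option.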

@chenmoneygithub (Contributor, Author)


sg!

@chenmoneygithub chenmoneygithub merged commit 1844b46 into keras-team:master May 11, 2022
@chenmoneygithub chenmoneygithub deleted the fix-finetune branch November 30, 2022 21:12