f1 metric - micro/macro/weighted #284
Conversation
Please let me know what my next steps will be. As far as I know, the next step is to add a test for the same.
@SSaishruthi Next steps: `__init__.py`, `README.md`, and tests.
Thanks @Squadrick
@Squadrick
@SSaishruthi The code looks good so far, just a few small changes. Also, you need to add the test cases and update `__init__.py`, `README.md`, and `BUILD`.
@Squadrick
Update the module to be called `F1Score` and have it accept `micro` as well. Like @WindQAQ mentioned here, it's a much cleaner API and reduces the amount of net code.
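For intuition, the micro/macro/weighted reductions can indeed live behind a single `average` argument, as suggested above. The sketch below is an illustrative plain-NumPy version (the function name and structure are mine, not the PR's tf.keras `Metric` implementation):

```python
import numpy as np

def f1_score(y_true, y_pred, average):
    """Illustrative F1 with `average` selecting the reduction.

    y_true, y_pred: integer class labels, shape (n_samples,).
    This is a NumPy sketch for intuition only, not the PR's
    tensorflow_addons F1Score metric.
    """
    classes = np.unique(np.concatenate([y_true, y_pred]))
    tp = np.array([np.sum((y_pred == c) & (y_true == c)) for c in classes])
    fp = np.array([np.sum((y_pred == c) & (y_true != c)) for c in classes])
    fn = np.array([np.sum((y_pred != c) & (y_true == c)) for c in classes])

    if average == 'micro':
        # Pool counts over all classes before computing F1.
        p = tp.sum() / max(tp.sum() + fp.sum(), 1)
        r = tp.sum() / max(tp.sum() + fn.sum(), 1)
        return 2 * p * r / (p + r) if (p + r) else 0.0

    # Per-class F1, then reduce.
    denom = 2 * tp + fp + fn
    per_class = np.where(denom > 0, 2 * tp / np.maximum(denom, 1), 0.0)
    if average == 'macro':
        return per_class.mean()
    if average == 'weighted':
        support = np.array([np.sum(y_true == c) for c in classes])
        return np.average(per_class, weights=support)
    raise ValueError("unknown average: %s" % average)
```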
I have updated the code to calculate the F1 scores under one class.
Colab: https://colab.research.google.com/drive/1qSq0SsYkPqjdKUgM1RM4kKM67X75ocFj#scrollTo=hZSxoW4eYqYW
I ran code format to check the scripts. I encountered an installation error in
@SSaishruthi Can you pull from
I have updated the branch and resolved conflicts.
Left a few comments about refactoring the test cases, after which we should be good to merge.
tensorflow_addons/metrics/f1_test.py (Outdated)

```python
def initialize_vars(self):
    f1_obj = F1Score(num_classes=3, average='micro')
    f1_obj1 = F1Score(num_classes=3, average='macro')
    f1_obj2 = F1Score(num_classes=3, average='weighted')
```
Nit: Rename `f1_obj` -> `f1_micro`, `f1_obj1` -> `f1_macro`, `f1_obj2` -> `f1_weighted`.
Do the renaming throughout the file.
We can merge as soon as this is done.
One solution I thought I could try is to have two sets of inputs, pass both (actual1, actual2, pred1, pred2), and use only the multiclass pair (actual1, pred1) for
Thanks
@SSaishruthi Oh, my bad. In that case, leave it as is.
@Squadrick updated the scripts. Can you please trigger the test again?
@SSaishruthi LGTM! Thanks for the contribution.
This PR contains a script for adding the f1-macro metric.
Colab notebook: https://colab.research.google.com/drive/1LJ1yb8cUgisNP3ETfMLRYPQEyJivzDv-
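As a quick illustration of why the averaging mode matters (a worked toy example in standalone NumPy arithmetic, independent of the PR's code): on imbalanced data, micro-F1 pools counts across all classes, while macro-F1 gives every class equal weight, so a poorly predicted minority class drags macro down much more than micro.

```python
import numpy as np

# Toy imbalanced data: class 0 has 4 samples, class 1 has only 1.
y_true = np.array([0, 0, 0, 0, 1])
y_pred = np.array([0, 0, 0, 1, 0])

# Per-class true positives, false positives, false negatives.
tp = np.array([np.sum((y_pred == c) & (y_true == c)) for c in (0, 1)])
fp = np.array([np.sum((y_pred == c) & (y_true != c)) for c in (0, 1)])
fn = np.array([np.sum((y_pred != c) & (y_true == c)) for c in (0, 1)])

per_class = 2 * tp / (2 * tp + fp + fn)  # [0.75, 0.0]
macro = per_class.mean()                 # 0.375: class 1 drags it down
p = tp.sum() / (tp.sum() + fp.sum())     # pooled precision, 3/5
r = tp.sum() / (tp.sum() + fn.sum())     # pooled recall, 3/5
micro = 2 * p * r / (p + r)              # ~0.6: dominated by class 0
```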