FBetaMeasure metric with one value per key #5638
Conversation
This looks quite good. Can we give it a more descriptive name than FBetaMeasure2? That's the only real concern I have.
all_metrics[f"{c}-recall"] = r
all_metrics[f"{c}-fscore"] = f

if "macro" in self._averages:
Since these calculations don't take any time, can you just do all of them every time? It doesn't hurt anything to have all the averages all the time, does it?
Done. Renamed to FBetaVerboseMeasure and removed the average option (always return all three averages).
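The resolved behavior (per-class precision/recall/fscore keys plus all three averages, with no average option) can be sketched as follows. This is a hypothetical standalone illustration, not the AllenNLP implementation: the function name `fbeta_verbose` and the list-of-labels inputs are assumptions for the sketch; the metric key naming (`{c}-precision`, `macro-fscore`, etc.) follows the diff above.

```python
# Hypothetical sketch of FBetaVerboseMeasure's output dict: one value per
# key for every class, plus macro, micro, and weighted averages, always.
from collections import Counter


def fbeta_verbose(gold, pred, beta=1.0):
    labels = sorted(set(gold) | set(pred))
    tp = Counter(g for g, p in zip(gold, pred) if g == p)  # true positives
    pred_counts = Counter(pred)  # predicted support per class
    gold_counts = Counter(gold)  # gold support per class
    b2 = beta ** 2

    metrics = {}
    per_class = []
    for c in labels:
        p = tp[c] / pred_counts[c] if pred_counts[c] else 0.0
        r = tp[c] / gold_counts[c] if gold_counts[c] else 0.0
        f = (1 + b2) * p * r / (b2 * p + r) if (p + r) else 0.0
        metrics[f"{c}-precision"] = p
        metrics[f"{c}-recall"] = r
        metrics[f"{c}-fscore"] = f
        per_class.append((c, p, r, f))

    n = len(labels)
    # Macro: unweighted mean over classes.
    metrics["macro-precision"] = sum(p for _, p, _, _ in per_class) / n
    metrics["macro-recall"] = sum(r for _, _, r, _ in per_class) / n
    metrics["macro-fscore"] = sum(f for _, _, _, f in per_class) / n
    # Micro: pool counts over all classes before dividing.
    tp_total = sum(tp.values())
    mp = tp_total / len(pred) if pred else 0.0
    mr = tp_total / len(gold) if gold else 0.0
    metrics["micro-precision"] = mp
    metrics["micro-recall"] = mr
    metrics["micro-fscore"] = (
        (1 + b2) * mp * mr / (b2 * mp + mr) if (mp + mr) else 0.0
    )
    # Weighted: mean weighted by gold-class support.
    total = len(gold)
    metrics["weighted-fscore"] = (
        sum(gold_counts[c] * f for c, _, _, f in per_class) / total
    )
    return metrics
```

Since all of these come from counts already accumulated, computing every average on each call is essentially free, which is what motivated dropping the `average` option.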
Thank you @dirkgr for reviewing this PR. I made the suggested changes.
It seems to be failing tests. I put an updated version at #5651 that fixes some things, but it looks like the actual test for the new metric is failing.
Thank you, @dirkgr, for pointing out this issue. Two tests were missing adaptation from the older version. They now pass in my tests. I have also incorporated your edits from #5651.
Thank you for sticking with it!
Thank you for your support!
Fixes #5637 .
Changes proposed in this pull request:
Before submitting
- section of the CONTRIBUTING docs.
- Writing docstrings section of the CONTRIBUTING docs.

After submitting
- codecov/patch reports high test coverage (at least 90%). You can find this under the "Actions" tab of the pull request once the other checks have finished.