Add Soft Weighted Kappa Loss #762
Conversation
Hi @seanpmorgan, I have reformatted the code, can you help me re-run the tests again?
Hi @seanpmorgan , sorry for disturbing you again. Two lines of the code exceeded the maximum line length (79); however, they were within comments and hadn't been automatically formatted, so I shortened them manually. Could you please trigger the test run again?
Thanks for the contribution! I do suppose this shares a lot of similar code with https://github.com/tensorflow/addons/blob/master/tensorflow_addons/metrics/cohens_kappa.py
Could we somehow create some utility functions to reduce the duplicated code?
Also, please open an issue before filing a PR next time so that the community can track/discuss in advance! Thank you!
Definitely okay! Also, do not forget to update README.md under the directory.
No, the loss calculation is quite different from the metric calculation, since the loss is based on the probabilities while the metric uses
Got it. Could you update the following files?
Thanks.
Thanks for the update! Remember to put kappa_loss.py here so that Bazel can build this module.
All (the pull request submitter and all commit authors) CLAs are signed, but one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are OK with their commits being contributed to this project. Please have them confirm that by leaving a comment that contains only Note to project maintainer: There may be cases where the author cannot leave a comment, or the comment is not properly detected as consent. In those cases, you can manually confirm consent of the commit author(s), and set the ℹ️ Googlers: Go here for more info.
Co-Authored-By: Gabriel de Marmiesse <[email protected]>
Hi @gabrieldemarmiesse , I refined the code according to your review, and all the tests passed. Thanks for your help.
Great! I'll take a look when I have some free time.
Thanks, @gabrieldemarmiesse.
Thanks, I took a quick look; it looks great, just a few changes I think we need to make.
Hi @gabrieldemarmiesse, I have made changes according to your code review, and all tests passed. Could you please have a look when you have time? Thanks.
Thanks a lot for the changes. This pull request is great and could technically be merged now, but I'm very picky about tests and readability: since we're a small team of maintainers, we have to make sure this will give us very little maintenance in the future. Here are some ideas of improvements to make this PR perfect. If you need some clarifications or help, don't hesitate to ask.
Hi @gabrieldemarmiesse , I changed the code according to your review. All tests passed except one:
https://github.com/tensorflow/addons/pull/762/checks?check_run_id=582034839
Thank you for the changes. Yeah, the error is unrelated to your pull request, I don't know why this happens. Don't worry about it, I'll review today. We don't need to fix the unrelated error to merge.
Great, thanks a lot for the pull request, we'll merge once the tests are passing.
Thanks again for the pull request!
Hi @gabrieldemarmiesse , thanks a lot for your help. I learnt a lot about code conventions from you.
* add weighted kappa loss
* add unit tests
* change some docs
* change python files format
* shorten some lines
* rename and update README and BUILD
* resolve conversations
* resolve converstions
* remove escape
* reformat tensorflow_addons/losses/kappa_loss* with black
* reformat code
* reformat code
* reformat code
* reformat code with black
* Update tensorflow_addons/losses/kappa_loss.py (Co-Authored-By: Gabriel de Marmiesse <[email protected]>)
* [KappaLoss] change according to review
* Update tensorflow_addons/losses/kappa_loss.py (Co-Authored-By: Gabriel de Marmiesse <[email protected]>)
* Update tensorflow_addons/losses/kappa_loss.py (Co-Authored-By: Gabriel de Marmiesse <[email protected]>)
* [KappaLoss] change accroding to code review
* [KappaLoss] change code format
* [SoftKappaLoss] mv kappa_loss_test.py to losses/tests
* Update .github/CODEOWNERS (Co-Authored-By: Gabriel de Marmiesse <[email protected]>)
* [SoftKappaLoss] refine codes according to code review
* [SoftKappaLoss] reformat codes
* [SoftKappaLoss] fix np_deep not defined
* [SoftKappaLoss] fix tests problem
* [SoftKappaLoss] unnecessary change to tigger CI
* Default value for the seed is not needed.

Co-authored-by: gabrieldemarmiesse <[email protected]>
The Quadratic Weighted Kappa metric is widely used both in the real world and in Kaggle competitions. Some people optimize this metric by using cross-entropy loss or MSE followed by threshold tuning.
However, it's better to use a differentiable (soft) kappa loss function, which can optimize the Quadratic Weighted Kappa metric directly. I implemented the soft version proposed in this paper: https://www.sciencedirect.com/science/article/abs/pii/S0167865517301666
I used it in the Data Science Bowl 2019 Kaggle competition; the notebook discussing it is here: https://www.kaggle.com/wuwenmin/dnn-and-effective-soft-quadratic-kappa-loss
I also wrote a Medium post about how to implement the weighted kappa loss with TF 2.0: https://medium.com/@wenmin_wu/directly-optimize-quadratic-weighted-kappa-with-tf2-0-482cf0318d74
I think it's better to put this in TensorFlow Addons, so more people can call it directly from the Addons package.
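To make the idea concrete, here is a minimal NumPy sketch of a soft quadratic weighted kappa loss of the kind described above: the quadratic disagreement observed between one-hot labels and predicted probabilities, divided by the disagreement expected if labels and predictions were independent, passed through a log. This is only an illustration of the math, not the PR's actual `tfa.losses` implementation; the function name and the `eps` stabilizer are my own choices.

```python
import numpy as np

def soft_quadratic_kappa_loss(y_true, y_pred, eps=1e-6):
    """Soft (differentiable) quadratic weighted kappa loss sketch.

    y_true: (N, C) one-hot labels; y_pred: (N, C) predicted probabilities.
    Returns log(observed_disagreement / expected_disagreement + eps),
    which decreases as the quadratic weighted kappa metric increases.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n, c = y_true.shape
    # Quadratic penalty matrix: weights[j, k] = (j - k)^2, zero on the diagonal.
    labels = np.arange(c, dtype=float)
    weights = (labels[:, None] - labels[None, :]) ** 2
    # Observed weighted disagreement between true classes and predictions;
    # soft because it uses the full probability vector, not an argmax.
    observed = np.sum((y_true @ weights) * y_pred)
    # Expected disagreement under independence of the two marginals.
    label_dist = y_true.sum(axis=0)  # per-class counts of true labels
    pred_dist = y_pred.sum(axis=0)   # per-class total predicted mass
    expected = label_dist @ weights @ pred_dist / n
    return np.log(observed / expected + eps)
```

With perfect one-hot predictions the observed disagreement is exactly zero, so the loss falls to `log(eps)`; predictions far from the true class (in ordinal distance) raise the loss, which is what lets gradient descent target the kappa metric directly.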