Add mean average precision metric for object detection #467
Conversation
Codecov Report
@@           Coverage Diff            @@
##           master    #467    +/-   ##
========================================
- Coverage      95%      95%     -0%
========================================
  Files         148      150      +2
  Lines        5142     5307    +165
========================================
+ Hits         4897     5050    +153
- Misses        245      257     +12
And why is this massive test failure happening for PT 1.4?
This PR looks great to me.
Thank you everybody for your support 🙂
The pleasure is on our side. Thanks for your work and patience... 🐰
Thanks to everybody for their input on getting this metric implemented 😄
Hello, one question: do we need to exclude class 0, which is the background class, when updating / handling the metric?
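Whether class 0 needs to be excluded depends on how the model uses it; if it is a pure background label, one option is to drop those entries before updating the metric. A minimal sketch, assuming per-image prediction dicts of boxes / scores / labels tensors (the helper name and dict keys below are illustrative, not part of the metric's API):

```python
import torch

# Illustrative helper (not part of the metric): remove background-class
# entries from a per-image sample before it is passed to update().
def drop_background(sample: dict, background_label: int = 0) -> dict:
    keep = sample["labels"] != background_label
    return {key: value[keep] for key, value in sample.items()}

pred = {
    "boxes": torch.tensor([[0.0, 0.0, 10.0, 10.0], [5.0, 5.0, 20.0, 20.0]]),
    "scores": torch.tensor([0.9, 0.8]),
    "labels": torch.tensor([0, 1]),  # first box is labelled as background
}

filtered = drop_background(pred)
print(filtered["labels"])  # tensor([1]) -- background entry removed
```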
Mean Average Precision (mAP) for object detection
New metric for object detection.
What does this PR do?
This PR introduces the commonly used mean average precision metric for object detection.
As there are multiple different implementations, and even different ways of calculating it, the new metric wraps the pycocotools evaluation, which is used as the standard for evaluation by several academic and open-source projects. This metric was actively discussed in an issue; resolves #53
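For illustration, a minimal usage sketch of the wrapped metric, assuming it is exposed under torchmetrics.detection and that pycocotools is installed; the exact class name and import path may differ between torchmetrics versions:

```python
import torch
# Import path assumed; depending on the version the class may instead be
# exposed as torchmetrics.detection.map.MAP or
# torchmetrics.detection.mean_ap.MeanAveragePrecision.
from torchmetrics.detection import MeanAveragePrecision

# One image with a single predicted box (xyxy format) and one ground-truth box.
preds = [
    {
        "boxes": torch.tensor([[258.0, 41.0, 606.0, 285.0]]),
        "scores": torch.tensor([0.536]),
        "labels": torch.tensor([0]),
    }
]
target = [
    {
        "boxes": torch.tensor([[214.0, 41.0, 562.0, 285.0]]),
        "labels": torch.tensor([0]),
    }
]

metric = MeanAveragePrecision()
metric.update(preds, target)
result = metric.compute()  # dict with "map", "map_50", "map_75", size-specific mAPs, ...
print(result["map"])
```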
TODO
- Check if pycocoeval can handle tensors to avoid .cpu() calls (it cannot)
- MAPMetricResults to have all evaluation results in there
- (get_coco_target and get_coco_preds methods)
- torchmetrics format
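As a rough illustration of the conversion step referenced in the TODO list (the actual get_coco_target / get_coco_preds helpers are internal to this PR and may differ), converting xyxy tensors into the dictionaries pycocotools expects and running COCOeval looks roughly like this:

```python
import torch
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

def xyxy_to_xywh(box: torch.Tensor) -> list:
    # COCO uses [x, y, width, height] boxes instead of [x1, y1, x2, y2].
    x1, y1, x2, y2 = box.tolist()
    return [x1, y1, x2 - x1, y2 - y1]

# Toy data: one image, one ground-truth box, one detection.
gt_dataset = {
    "images": [{"id": 1}],
    "categories": [{"id": 0}],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 0,
            "iscrowd": 0,
            "bbox": xyxy_to_xywh(torch.tensor([214.0, 41.0, 562.0, 285.0])),
            "area": (562.0 - 214.0) * (285.0 - 41.0),
        }
    ],
}
detections = [
    {
        "image_id": 1,
        "category_id": 0,
        "score": 0.536,
        "bbox": xyxy_to_xywh(torch.tensor([258.0, 41.0, 606.0, 285.0])),
    }
]

# Build an in-memory ground-truth COCO object and load the detections.
coco_gt = COCO()
coco_gt.dataset = gt_dataset
coco_gt.createIndex()
coco_dt = coco_gt.loadRes(detections)

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the standard 12 COCO metrics
print("mAP:", coco_eval.stats[0])
```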
Note
This is my first contribution to the PyTorchLightning project. Please review the code carefully and give me hints on how to improve and match your guidelines.