
Calculating false positives and other metrics #4133

Closed
FleetingA opened this issue Jul 24, 2021 · 3 comments · Fixed by #5727
Labels
question (Further information is requested) · Stale (Stale and scheduled for closing soon)

Comments

@FleetingA

What is the best way to calculate metrics (such as false positives and negatives) on a complete dataset after training is complete?

I want to calculate how many of the objects of interest in my images have been identified correctly (I have 4700 training images), but am a bit unsure how to derive these metrics.

Thanks in advance!

FleetingA added the question label on Jul 24, 2021
@glenn-jocher (Member)

@FleetingA you can compute your TP and FP counts directly from your P and R, i.e. a recall of 0.9 means that 90% of your objects were correctly identified, and a precision of 0.5 means that for every TP there was also one FP.

See https://en.wikipedia.org/wiki/Precision_and_recall
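
As a worked sketch of that arithmetic (the 100-label class size below is an assumed example value, not from this thread):

```python
# Illustrative sketch: derive TP/FP/FN counts for one class from its
# precision (P), recall (R), and ground-truth label count.
labels = 100       # ground-truth objects of this class (assumed example)
recall = 0.9       # 90% of objects correctly identified
precision = 0.5    # half of all predictions are correct

tp = recall * labels        # TP = R * labels    -> 90
fp = tp / precision - tp    # FP = TP / P - TP   -> 90
fn = labels - tp            # FN = labels - TP   -> 10
print(f"TP={tp:.0f}  FP={fp:.0f}  FN={fn:.0f}")
```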

github-actions bot (Contributor) commented Aug 24, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label on Aug 24, 2021
glenn-jocher (Member) commented Nov 20, 2021

@FleetingA good news 😃! Your original issue may now be fixed ✅ in PR #5727. This PR explicitly computes TP and FP from the existing Labels, P, and R metrics:

TP = Recall * Labels
FP = TP / Precision - TP

These TP and FP per-class vectors are left in val.py for users to access if they want:

yolov5/val.py, Line 240 in 36d12a5:

tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
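
If you want these counts printed per class, a minimal sketch (not part of the repository) is to add a loop right after that line, assuming the returned tp and fp vectors are ordered to match ap_class and that names maps class indices to class names:

```python
# Hypothetical addition after the ap_per_class() call in val.py:
# print the per-class TP/FP counts, assuming tp and fp align with ap_class.
for i, c in enumerate(ap_class):
    print(f'{names[c]:>20s}  TP={tp[i]:.0f}  FP={fp[i]:.0f}')
```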

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks in Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

glenn-jocher linked a pull request on Nov 20, 2021 that will close this issue