Specificity if TN + FN = 0 #2487
The specificity is defined as $\text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}}$. As far as I can see, in case $\text{FP} = 0$ the specificity should be 1.

The docs say that in case $\text{FP} + \text{TP} = 0$, the metric is not defined (see the reference below). To put this into words: if the dataset does not contain any positive samples, the metric is not defined, and it will return 0.

Is this really intended? It may be a weird edge case, but from the metric definition I see no argument why the metric should be undefined when there are no positive samples in the data. In case $\text{FP} + \text{TP} = 0$, I think the metric should return 1, not 0.

torchmetrics version: 1.3.2
Reference: torchmetrics/src/torchmetrics/classification/specificity.py, lines 450 to 459 at c1f8334

Comments

I just saw that this might be just a documentation issue. The classes are documented in torchmetrics/src/torchmetrics/classification/specificity.py, lines 31 to 39 at c1f8334, and the implementation seems to follow this definition.
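The edge case discussed in this issue can be made concrete with a small sketch. This is plain Python for illustration, not the torchmetrics implementation; the `specificity` helper below is hypothetical. It shows that the definition $\text{TN} / (\text{TN} + \text{FP})$ only breaks down when there are no negative samples at all ($\text{TN} + \text{FP} = 0$), whereas a dataset with no positive samples still yields a well-defined value:

```python
def specificity(tn: int, fp: int) -> float:
    """Compute specificity = TN / (TN + FP).

    Hypothetical helper for illustration only -- this is not the
    torchmetrics implementation.
    """
    if tn + fp == 0:
        # No negative samples at all: the ratio 0/0 is genuinely undefined.
        raise ZeroDivisionError("specificity is undefined when TN + FP == 0")
    return tn / (tn + fp)


# Dataset with no positive samples and no positive predictions
# (TP = FP = FN = 0, TN > 0): the denominator TN + FP equals TN > 0,
# so specificity is well defined and equals 1.
print(specificity(tn=5, fp=0))  # -> 1.0

# The truly undefined case is TN + FP == 0 (no negative samples).
try:
    specificity(tn=0, fp=0)
except ZeroDivisionError as err:
    print(err)
```

Under this reading, "no positive samples" makes the denominator of precision or recall vanish, but never the denominator of specificity, which supports the argument that returning 0 here is surprising.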