Specificity if TN + FN = 0 #2487

Closed
lukazso opened this issue Apr 3, 2024 · 1 comment · Fixed by #2491

Comments


lukazso commented Apr 3, 2024

The specificity is defined as $\text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}}$. As far as I see, in case $\text{FP} = 0$, the specificity should be 1.

The docs say that in case $\text{FP} + \text{TP} = 0$, the metric is not defined (see reference below). In other words, if the dataset does not contain any positive samples, the metric is considered undefined and returns 0.

Is this really intended? It may be a weird edge case, but from the metric definition I see no argument for why the metric should be undefined when there are no positive samples in the data. In case $\text{FP} + \text{TP} = 0$, I think the metric should return 1, not 0.
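To make the edge case concrete, here is a minimal pure-Python sketch of specificity computed from confusion counts. This is not the torchmetrics implementation; the `zero_division` fallback parameter is a hypothetical name used only for illustration:

```python
def specificity(tn: int, fp: int, zero_division: float = 0.0) -> float:
    """Specificity = TN / (TN + FP).

    Returns `zero_division` when TN + FP == 0, i.e. when the data
    contains no negative samples and the ratio is truly undefined.
    """
    denom = tn + fp
    if denom == 0:
        return zero_division
    return tn / denom


# Dataset with no positive samples and a perfect classifier: FP = 0, TN > 0.
# The ratio is perfectly well defined here, as argued above.
print(specificity(tn=10, fp=0))  # 1.0

# Dataset with no negative samples: TN + FP = 0, the metric is undefined.
print(specificity(tn=0, fp=0))   # falls back to zero_division (0.0)
```

Note that the fallback is only needed when $\text{TN} + \text{FP} = 0$ (no negatives), not when $\text{TP} + \text{FP} = 0$ (no positive predictions), which is the distinction this issue is about.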

torchmetrics version: 1.3.2

```python
class Specificity(_ClassificationTaskWrapper):
    r"""Compute `Specificity`_.

    .. math:: \text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}}

    Where :math:`\text{TN}` and :math:`\text{FP}` represent the number of true negatives and false positives
    respectively. The metric is only proper defined when :math:`\text{TP} + \text{FP} \neq 0`. If this case is
    encountered for any class/label, the metric for that class/label will be set to 0 and the overall metric may
    therefore be affected in turn.
    """
```


lukazso commented Apr 3, 2024

I just saw that this might be just a documentation issue. The classes BinarySpecificity, MulticlassSpecificity, and MultilabelSpecificity all have the correct description:

```python
class BinarySpecificity(BinaryStatScores):
    r"""Compute `Specificity`_ for binary tasks.

    .. math:: \text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}}

    Where :math:`\text{TN}` and :math:`\text{FP}` represent the number of true negatives and false positives
    respectively. The metric is only proper defined when :math:`\text{TN} + \text{FP} \neq 0`. If this case is
    encountered a score of 0 is returned.
    """
```

Also, the implementation seems to follow this definition.
