F1 and Precision/Recall value not consistent. #58
Comments
So currently we always interpret the

```python
precision_recall(y_pred, y_true, class_reduction='none')
# (tensor([0.5000, 0.0000]), tensor([1., 0.]))
```

But we choose […] So these inconsistent values make me so confused.
```python
precision_recall(y_pred, y_true, class_reduction='none')
# (tensor([0.5000, 0.0000]), tensor([1., 0.]))
```

@Whisht here the first output is the precision for each class, meaning 0.5 for class 0 and 0.0 for class 1, and the second output is the recall, which is 1 for class 0 and 0 for class 1. That seems correct to me.
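To make the per-class interpretation concrete, here is a minimal pure-Python sketch of the arithmetic behind per-class precision and recall. The inputs `y_true`/`y_pred` are hypothetical (the thread never shows them), chosen so that every sample is predicted as class 0 and the outputs match the tensors above; this is not the torchmetrics implementation, just the counts it reduces.

```python
# Hypothetical inputs chosen to reproduce the outputs above:
# every sample is predicted as class 0.
y_true = [0, 0, 1, 1]
y_pred = [0, 0, 0, 0]

def per_class_precision_recall(y_pred, y_true, num_classes):
    precisions, recalls = [], []
    for c in range(num_classes):
        tp = sum(p == c and t == c for p, t in zip(y_pred, y_true))
        fp = sum(p == c and t != c for p, t in zip(y_pred, y_true))
        fn = sum(p != c and t == c for p, t in zip(y_pred, y_true))
        # Report 0 when the denominator is 0 (no predictions / no positives).
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return precisions, recalls

print(per_class_precision_recall(y_pred, y_true, 2))
# ([0.5, 0.0], [1.0, 0.0])
```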
In 2-class classification, f1 returns the same value as accuracy. This is not good.

To Reproduce

```python
import torch
from pytorch_lightning.metrics import ConfusionMatrix, F1

target = torch.tensor([1, 1, 0, 0])
preds = torch.tensor([0, 1, 0, 0])
ConfusionMatrix(num_classes=2)(preds, target)
# -> tensor([[2., 0.],
#            [1., 1.]])
F1(num_classes=2)(preds, target)
# -> tensor(0.7500)
```

In the above situation, f1 should be 0.8 or 0.666.

pytorch-lightning 1.2.0
@taketakeseijin if you want the score for the positive class (which is often the case for binary classification) you can do:

```python
F1(num_classes=1)(preds, target)
# tensor(0.6667)
```

For the negative (0) class you can do:

```python
F1(num_classes=2, average=None)(preds, target)[0]  # returns both scores, pick the first
# tensor(0.8000)
```
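On why the default equals accuracy: with micro averaging in a single-label problem, every false positive for one class is a false negative for another, so pooled precision, recall, and F1 all collapse to accuracy (0.75 here). A small sketch (plain Python, not the torchmetrics code) computing the per-class F1 scores for the example above:

```python
def f1_per_class(preds, target, num_classes):
    f1s = []
    for c in range(num_classes):
        tp = sum(p == c and t == c for p, t in zip(preds, target))
        fp = sum(p == c and t != c for p, t in zip(preds, target))
        fn = sum(p != c and t == c for p, t in zip(preds, target))
        # F1 = 2*tp / (2*tp + fp + fn); 0 if the class never appears.
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0)
    return f1s

target = [1, 1, 0, 0]
preds = [0, 1, 0, 0]
print(f1_per_class(preds, target, 2))
# [0.8, 0.6666666666666666]
```

Micro-averaging instead pools tp/fp/fn across classes before dividing, which is why neither 0.8 nor 0.667 is returned by the default reduction.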
Hi! Thanks for your contribution, great first issue!
🐛 Bug
The return of f1 and precision is wrong.
To Reproduce
As mentioned above, if we take `0` as the positive class, then `tp, fp, tn, fn = [8, 8, 0, 0]`, and `precision` will be `0.5`, `recall` should be `1`. But the `precision()` method gets a `0.5` output.

Expected behavior
The values should be consistent. And a parameter that could make any class the positive class (like sklearn) would be easier to use.
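For reference, scikit-learn exposes this through the `pos_label` parameter of its binary metric functions, which lets any label act as the positive class. A quick sketch using the same `preds`/`target` as the comment above (requires scikit-learn):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

target = [1, 1, 0, 0]
preds = [0, 1, 0, 0]

# Treat label 0 as the positive class.
print(precision_score(target, preds, pos_label=0))  # 0.6666666666666666
print(recall_score(target, preds, pos_label=0))     # 1.0
print(f1_score(target, preds, pos_label=0))         # 0.8
```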
Environment

* python = 3.8.5
* pytorch-lightning = 1.1.6
* pytorch = 1.7