
Metric Accuracy doesn't work as previous version #233

Closed
byte-sourcerer opened this issue May 7, 2021 · 2 comments

Labels: bug / fix, help wanted

@byte-sourcerer

Versions

pytorch 1.8.1
pytorch_lightning 1.2.10

Reproduce

import torch
from pytorch_lightning.metrics import Accuracy

acc = Accuracy()
preds = torch.rand(4, 4)  # raw random scores; each row does not sum to 1
target = torch.tensor([0, 1, 2, 3]).long()
print(acc(preds, target))

The code above raises an error like:

~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/metrics/classification/helpers.py in _check_classification_inputs(preds, target, threshold, num_classes, is_multiclass, top_k)
    302     if case in (DataType.MULTICLASS, DataType.MULTIDIM_MULTICLASS) and preds.is_floating_point():
    303         if not torch.isclose(preds.sum(dim=1), torch.ones_like(preds.sum(dim=1))).all():
--> 304             raise ValueError("Probabilities in `preds` must sum up to 1 accross the `C` dimension.")
    305 
    306     # Check consistency with the `C` dimension in case of multi-class data

ValueError: Probabilities in `preds` must sum up to 1 accross the `C` dimension.

I believe 🤔 that in a previous version the above code worked fine, and I didn't have to normalize the `C` dimension of `preds` to sum to 1.
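
For reference, a minimal sketch of the check that fires (mirroring line 303 of `helpers.py` in the traceback): rows of `torch.rand` output essentially never sum to 1, while a softmax over the class dimension would.

import torch

preds = torch.rand(4, 4)     # uniform random scores in [0, 1)
row_sums = preds.sum(dim=1)  # each row sums to ~2 on average, not 1
# Same check as in _check_classification_inputs:
print(torch.isclose(row_sums, torch.ones_like(row_sums)).all())  # tensor(False)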

Workaround

import torch
from pytorch_lightning.metrics import Accuracy

acc = Accuracy()
preds = torch.rand(4, 4)
target = torch.tensor([0, 1, 2, 3]).long()
# Pass hard class predictions instead of unnormalized per-class scores:
print(acc(preds.argmax(dim=1), target))
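
Alternatively, if you want to keep per-class probabilities rather than hard labels, a softmax over the `C` dimension should also satisfy the check — an untested sketch against the same versions:

import torch
from pytorch_lightning.metrics import Accuracy

acc = Accuracy()
preds = torch.rand(4, 4)
target = torch.tensor([0, 1, 2, 3]).long()
# softmax normalizes each row to sum to 1 across the C dimension:
print(acc(preds.softmax(dim=1), target))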

@carmocca (Contributor) commented May 7, 2021

Transferring the issue to torchmetrics since we rely on their implementation.

@carmocca carmocca transferred this issue from Lightning-AI/pytorch-lightning May 7, 2021
@carmocca carmocca added the bug / fix and help wanted labels May 7, 2021
@SkafteNicki (Member)

It is correct that at some point in the past we supported raw model output (logits) for various classification metrics.
We then moved to supporting only probability input when the metrics gained a few extra features (like top_k calculation).
That said, we have acknowledged that we should support logit input: there is already another issue open, #74, and a PR #200 in progress.
I am going to close this; feel free to add additional comments in #74.
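
For context on the top_k feature mentioned above, a minimal sketch (assuming the `top_k` argument of `Accuracy` in this version) of why probability-shaped `(N, C)` input is needed:

import torch
from pytorch_lightning.metrics import Accuracy

# top_k accuracy needs the full (N, C) score matrix, not argmax labels,
# which is why the stricter input validation was introduced.
acc_top2 = Accuracy(top_k=2)
preds = torch.rand(4, 4).softmax(dim=1)
target = torch.tensor([0, 1, 2, 3]).long()
print(acc_top2(preds, target))  # hit if target is among the 2 highest scores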
