classification metrics #4043 (Merged)

Changes from all commits (7 commits):
dd9c584  docs + precision + recall + f_beta + refactor  (ananyahjha93)
1ef3ef2  rebase  (ananyahjha93)
f7c5c2d  fixes  (ananyahjha93)
f810486  added missing file  (ananyahjha93)
3ca8123  docs  (ananyahjha93)
344a518  docs  (ananyahjha93)
f2f7ec9  extra import  (ananyahjha93)
pytorch_lightning/metrics/classification/__init__.py
@@ -1 +1,3 @@
 from pytorch_lightning.metrics.classification.accuracy import Accuracy
+from pytorch_lightning.metrics.classification.precision_recall import Precision, Recall
+from pytorch_lightning.metrics.classification.f_beta import Fbeta
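
For orientation, the expanded ``__init__`` re-exports the new metrics next to ``Accuracy``. A minimal usage sketch, assuming ``Precision`` and ``Recall`` are surfaced under ``pytorch_lightning.metrics`` the same way ``Fbeta`` is in the docstring example below:

import torch
from pytorch_lightning.metrics import Precision, Recall, Fbeta

target = torch.tensor([0, 1, 2, 0, 1, 2])
preds = torch.tensor([0, 2, 1, 0, 0, 1])

precision = Precision(num_classes=3)
recall = Recall(num_classes=3)
f1 = Fbeta(num_classes=3, beta=1.0)  # beta=1.0 reduces F-beta to the usual F1 score

# calling a Metric runs update() and, with compute_on_step=True (the default), compute()
print(precision(preds, target), recall(preds, target), f1(preds, target))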
pytorch_lightning/metrics/classification/f_beta.py (new file)
@@ -0,0 +1,119 @@
import math
import functools
from abc import ABC, abstractmethod
from typing import Any, Callable, Optional, Union
from collections.abc import Mapping, Sequence
from collections import namedtuple

import torch
from torch import nn

from pytorch_lightning.metrics.metric import Metric
from pytorch_lightning.metrics.classification.precision_recall import _input_format
from pytorch_lightning.metrics.utils import METRIC_EPS


class Fbeta(Metric):
    """
    Computes the F-beta metric.

    Works with binary, multiclass, and multilabel data.
    Accepts logits from a model output or integer class values in prediction.
    Works with multi-dimensional preds and target.

    Forward accepts

    - ``preds`` (float or long tensor): ``(N, ...)`` or ``(N, C, ...)`` where C is the number of classes
    - ``target`` (long tensor): ``(N, ...)``

    If preds and target are the same shape and preds is a float tensor, we use the ``self.threshold`` argument.
    This is the case for binary and multi-label logits.

    If preds has an extra dimension, as in the case of multi-class scores, we perform an argmax on ``dim=1``.

    Args:
        num_classes: Number of classes in the dataset.
        beta: Beta coefficient in the F measure.
        threshold:
            Threshold value for binary or multi-label logits. default: 0.5
        average:
            * `'micro'` computes metric globally
            * `'macro'` computes metric for each class and then takes the mean
        multilabel: If predictions are from multilabel classification.
        compute_on_step:
            Forward only calls ``update()`` and returns None if this is set to False. default: True
        dist_sync_on_step:
            Synchronize metric state across processes at each ``forward()``
            before returning the value at the step. default: False
        process_group:
            Specify the process group on which synchronization is called. default: None (which selects the entire world)

    Example:

        >>> from pytorch_lightning.metrics import Fbeta
        >>> target = torch.tensor([0, 1, 2, 0, 1, 2])
        >>> preds = torch.tensor([0, 2, 1, 0, 0, 1])
        >>> f_beta = Fbeta(num_classes=3, beta=0.5)
        >>> f_beta(preds, target)
        tensor(0.3333)

    """
    def __init__(
        self,
        num_classes: int = 1,
        beta: float = 1.,
        threshold: float = 0.5,
        average: str = 'micro',
        multilabel: bool = False,
        compute_on_step: bool = True,
        dist_sync_on_step: bool = False,
        process_group: Optional[Any] = None,
    ):
        super().__init__(
            compute_on_step=compute_on_step,
            dist_sync_on_step=dist_sync_on_step,
            process_group=process_group,
        )

        self.num_classes = num_classes
        self.beta = beta
        self.threshold = threshold
        self.average = average
        self.multilabel = multilabel

        assert self.average in ('micro', 'macro'), \
            "average passed to the function must be either `micro` or `macro`"

        # per-class running counts; summed across processes on state sync
        self.add_state("true_positives", default=torch.zeros(num_classes), dist_reduce_fx="sum")
        self.add_state("predicted_positives", default=torch.zeros(num_classes), dist_reduce_fx="sum")
        self.add_state("actual_positives", default=torch.zeros(num_classes), dist_reduce_fx="sum")

    def update(self, preds: torch.Tensor, target: torch.Tensor):
        """
        Update state with predictions and targets.

        Args:
            preds: Predictions from model
            target: Ground truth values
        """
        preds, target = _input_format(self.num_classes, preds, target, self.threshold, self.multilabel)

        # after _input_format, preds and target are per-class indicator tensors,
        # so summing over dim=1 yields one count per class
        self.true_positives += torch.sum(preds * target, dim=1)
        self.predicted_positives += torch.sum(preds, dim=1)
        self.actual_positives += torch.sum(target, dim=1)

    def compute(self):
        """
        Computes the F-beta score over accumulated state.
        """
        # METRIC_EPS keeps the divisions finite when a class was never predicted or never present
        if self.average == 'micro':
            precision = self.true_positives.sum().float() / (self.predicted_positives.sum() + METRIC_EPS)
            recall = self.true_positives.sum().float() / (self.actual_positives.sum() + METRIC_EPS)

            return (1 + self.beta ** 2) * (precision * recall) / (self.beta ** 2 * precision + recall)
        elif self.average == 'macro':
            precision = self.true_positives.float() / (self.predicted_positives + METRIC_EPS)
            recall = self.true_positives.float() / (self.actual_positives + METRIC_EPS)

            return ((1 + self.beta ** 2) * (precision * recall) / (self.beta ** 2 * precision + recall)).mean()
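
As a sanity check on the docstring example above: with micro averaging, the true positives sum to 2 (indices 0 and 3 are correct), while predicted and actual positives each sum to 6, so precision = recall = 2/6; F-beta reduces to that common value whenever precision equals recall, hence ``tensor(0.3333)``. Below is a minimal accumulation sketch with hypothetical batch data, assuming only the ``Fbeta`` class added in this PR:

import torch
from pytorch_lightning.metrics import Fbeta

# macro averaging: per-class F-beta first, then the unweighted mean over classes
f_beta = Fbeta(num_classes=3, beta=0.5, average='macro')

# accumulate the true/predicted/actual-positive counts over several batches ...
for preds, target in [
    (torch.tensor([0, 2, 1]), torch.tensor([0, 1, 2])),
    (torch.tensor([0, 0, 1]), torch.tensor([0, 1, 2])),
]:
    f_beta.update(preds, target)

# ... and reduce the accumulated state once at the end
print(f_beta.compute())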