Commit

Merge branch 'f_beta_update' of https://github.com/PyTorchLightning/metrics into f_beta_update
SkafteNicki committed Mar 24, 2021
2 parents 6eb4dcc + 5e7c2d3 commit aa0f001
Showing 3 changed files with 24 additions and 30 deletions.
.github/workflows/docs-check.yml (2 changes: 1 addition, 1 deletion)

@@ -89,7 +89,7 @@ jobs:
  # First run the same pipeline as Read-The-Docs
  cd docs
  make clean
- make html --debug --jobs 2 SPHINXOPTS="-W"
+ make html --debug --jobs 2 SPHINXOPTS="-W --keep-going"
  - name: Upload built docs
    uses: actions/upload-artifact@v2
Makefile (2 changes: 1 addition, 1 deletion)

@@ -22,7 +22,7 @@ test: clean env

  docs: clean
  	pip install --quiet -r docs/requirements.txt
- 	python -m sphinx -b html -W docs/source docs/build
+ 	python -m sphinx -b html -W --keep-going docs/source docs/build

  env:
  	pip install -r requirements.txt
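Both the workflow and the Makefile hunks make the same change: with Sphinx, `-W` turns warnings into errors, and `--keep-going` lets the build run to completion instead of stopping at the first such error, so every warning is reported in a single pass. A minimal invocation sketch, using the `docs/source` and `docs/build` paths from the Makefile:

```shell
# -W: treat warnings as errors (build exits non-zero if any occur)
# --keep-going: do not abort on the first warning-as-error; finish the
#               build so all warnings surface in one run
python -m sphinx -b html -W --keep-going docs/source docs/build
```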
torchmetrics/functional/classification/f_beta.py (50 changes: 22 additions, 28 deletions)

@@ -86,7 +86,7 @@ def fbeta(
  Computes f_beta metric.

  .. math::
-     F_\beta = (1 + \beta^2) * \frac{\text{precision} * \text{recall}}
+     F_{\beta} = (1 + \beta^2) * \frac{\text{precision} * \text{recall}}
      {(\beta^2 * \text{precision}) + \text{recall}}

  Works with binary, multiclass, and multilabel data.
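The formula in the hunk above is easy to sanity-check numerically. The helper below is not torchmetrics code, just a plain-Python sketch of the same definition; the zero-division guard for precision = recall = 0 is an added assumption:

```python
def fbeta_score(precision: float, recall: float, beta: float) -> float:
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0  # assumed convention for the degenerate case
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# beta = 1 gives the harmonic mean of precision and recall (F1);
# beta > 1 weights recall more heavily, beta < 1 weights precision.
print(fbeta_score(0.5, 1.0, beta=1.0))  # ≈ 0.667 (F1)
print(fbeta_score(0.5, 1.0, beta=2.0))  # ≈ 0.833 (recall-leaning)
```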
@@ -108,15 +108,15 @@ def fbeta(
average:
Defines the reduction that is applied. Should be one of the following:
-     - ``'micro'`` [default]: Calculate the metric globally, accross all samples and classes.
-     - ``'macro'``: Calculate the metric for each class separately, and average the
-       metrics accross classes (with equal weights for each class).
-     - ``'weighted'``: Calculate the metric for each class separately, and average the
-       metrics accross classes, weighting each class by its support (``tp + fn``).
-     - ``'none'`` or ``None``: Calculate the metric for each class separately, and return
-       the metric for every class.
-     - ``'samples'``: Calculate the metric for each sample, and average the metrics
-       across samples (with equal weights for each sample).
+     - ``'micro'`` [default]: Calculate the metric globally, across all samples and classes.
+     - ``'macro'``: Calculate the metric for each class separately, and average the
+       metrics across classes (with equal weights for each class).
+     - ``'weighted'``: Calculate the metric for each class separately, and average the
+       metrics across classes, weighting each class by its support (``tp + fn``).
+     - ``'none'`` or ``None``: Calculate the metric for each class separately, and return
+       the metric for every class.
+     - ``'samples'``: Calculate the metric for each sample, and average the metrics
+       across samples (with equal weights for each sample).
.. note:: What is considered a sample in the multi-dimensional multi-class case
depends on the value of ``mdmc_average``.
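The difference between these reductions is easiest to see on a small worked example. The code below is not from the library; it is a self-contained sketch using made-up per-class (tp, fp, fn) counts for one common and one rare class, computing F1 under each averaging mode:

```python
def precision_recall(tp, fp, fn):
    # Per-class precision and recall from raw counts.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def f1(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

# Made-up per-class (tp, fp, fn): class 0 is common, class 1 is rare.
stats = [(90, 10, 10), (1, 1, 9)]

# 'micro': pool the counts over all classes, then compute the metric once.
tp, fp, fn = (sum(s[i] for s in stats) for i in range(3))
micro = f1(*precision_recall(tp, fp, fn))

# 'macro': compute the metric per class, then take the unweighted mean.
macro = sum(f1(*precision_recall(*s)) for s in stats) / len(stats)

# 'weighted': per-class metrics weighted by support (tp + fn).
support = [s[0] + s[2] for s in stats]
weighted = sum(w * f1(*precision_recall(*s))
               for w, s in zip(support, stats)) / sum(support)

print(f"micro={micro:.3f} macro={macro:.3f} weighted={weighted:.3f}")
# micro ≈ 0.858, macro ≈ 0.533, weighted ≈ 0.833: the rare, badly
# predicted class drags macro down but barely moves micro or weighted.
```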
@@ -125,38 +125,32 @@ def fbeta(
Defines how averaging is done for multi-dimensional multi-class inputs (on top of the
``average`` parameter). Should be one of the following:
-     - ``None`` [default]: Should be left unchanged if your data is not multi-dimensional
-       multi-class.
-     - ``'samplewise'``: In this case, the statistics are computed separately for each
-       sample on the ``N`` axis, and then averaged over samples.
-       The computation for each sample is done by treating the flattened extra axes ``...``
-       (see :ref:`references/modules:input types`) as the ``N`` dimension within the sample,
-       and computing the metric for the sample based on that.
-     - ``'global'``: In this case the ``N`` and ``...`` dimensions of the inputs
-       (see :ref:`references/modules:input types`)
-       are flattened into a new ``N_X`` sample axis, i.e. the inputs are treated as if they
-       were ``(N_X, C)``. From here on the ``average`` parameter applies as usual.
+     - ``None`` [default]: Should be left unchanged if your data is not multi-dimensional
+       multi-class.
+     - ``'samplewise'``: In this case, the statistics are computed separately for each
+       sample on the ``N`` axis, and then averaged over samples.
+       The computation for each sample is done by treating the flattened extra axes ``...``
+       (see :ref:`references/modules:input types`) as the ``N`` dimension within the sample,
+       and computing the metric for the sample based on that.
+     - ``'global'``: In this case the ``N`` and ``...`` dimensions of the inputs
+       (see :ref:`references/modules:input types`)
+       are flattened into a new ``N_X`` sample axis, i.e. the inputs are treated as if they
+       were ``(N_X, C)``. From here on the ``average`` parameter applies as usual.
ignore_index:
Integer specifying a target class to ignore. If given, this class index does not contribute
to the returned score, regardless of reduction method. If an index is ignored, and ``average=None``
or ``'none'``, the score for the ignored class will be returned as ``nan``.
num_classes:
Number of classes. Necessary for ``'macro'``, ``'weighted'`` and ``None`` average methods.
threshold:
Threshold probability value for transforming probability predictions to binary
(0,1) predictions, in the case of binary or multi-label inputs.
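A minimal sketch of what thresholding does for binary or multi-label probability inputs. This is plain Python, not the library's internals; whether the boundary value itself counts as positive is an implementation detail, and `>=` is assumed here:

```python
# Hypothetical probability predictions for four binary samples.
probs = [0.1, 0.4, 0.5, 0.9]
threshold = 0.5

# Probabilities at or above the threshold become 1, the rest 0
# (the >= boundary is an assumption of this sketch).
binary_preds = [int(p >= threshold) for p in probs]
print(binary_preds)  # [0, 0, 1, 1]
```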
  top_k:
      Number of highest probability entries for each sample to convert to 1s - relevant
      only for inputs with probability predictions. If this parameter is set for multi-label
      inputs, it will take precedence over ``threshold``. For (multi-dim) multi-class inputs,
-     this parameter defaults to 1.
-     Should be left unset (``None``) for inputs with label predictions.
+     this parameter defaults to 1. Should be left unset (``None``) for inputs with label predictions.
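The top-k conversion described above can be sketched in a few lines of plain Python. This is an illustration, not the library's implementation; in particular, the tie-breaking rule is an arbitrary choice of this sketch:

```python
# Hypothetical class probabilities for two samples over four classes.
probs = [
    [0.1, 0.6, 0.2, 0.1],
    [0.4, 0.1, 0.4, 0.1],
]

def to_top_k_onehot(row, k):
    # The k highest-probability indices become 1, the rest 0.
    # Ties are broken by position here, an illustrative choice only.
    top = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
    return [int(i in top) for i in range(len(row))]

preds = [to_top_k_onehot(row, k=2) for row in probs]
print(preds)  # [[0, 1, 1, 0], [1, 0, 1, 0]]
```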
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
