
Commit 9bad088

taha-abdullah authored and dkuegler committed

PR 590 Review changes:
- quicktest.yaml: commenting out trigger on pull request
- FastSurferCNN.utils.metrics.py: docstring changes
1 parent 6d555b1 commit 9bad088


1 file changed: +18 −15 lines changed


FastSurferCNN/utils/metrics.py (+18 −15)
@@ -109,17 +109,20 @@ class DiceScore:
     """
     Accumulating the component of the dice coefficient i.e. the union and intersection.
 
-    Args:
-        op (callable): a callable to update accumulator. Method's signature is `(accumulator, output)`.
-            For example, to compute arithmetic mean value, `op = lambda a, x: a + x`.
-        output_transform (callable, optional): a callable that is used to transform the
-            :class:`~ignite.engine.Engine`'s `process_function`'s output into the
-            form expected by the metric. This can be useful if, for example, you have a multi-output model and
-            you want to compute the metric with respect to one of the outputs.
-        device (str of torch.device, optional): device specification in case of distributed computation usage.
-            In most of the cases, it can be defined as "cuda:local_rank" or "cuda"
-            if already set `torch.cuda.set_device(local_rank)`. By default, if a distributed process group is
-            initialized and available, device is set to `cuda`.
+    Parameters
+    ----------
+    op : callable
+        A callable to update the accumulator. Method's signature is `(accumulator, output)`.
+        For example, to compute arithmetic mean value, `op = lambda a, x: a + x`.
+    output_transform : callable, optional
+        A callable that is used to transform the :class:`~ignite.engine.Engine`'s `process_function`'s output into the
+        form expected by the metric. This can be useful if, for example, you have a multi-output model and
+        you want to compute the metric with respect to one of the outputs.
+    device : str or torch.device, optional
+        Device specification in case of distributed computation usage.
+        In most cases, it can be defined as "cuda:local_rank" or "cuda"
+        if already set `torch.cuda.set_device(local_rank)`. By default, if a distributed process group is
+        initialized and available, the device is set to `cuda`.
     """
     def __init__(
         self,
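For reference, the constructor arguments documented in this hunk (op, output_transform, device) could be wired up roughly as in the sketch below. This is a minimal illustration inferred only from the docstring text; the actual signature of FastSurferCNN.utils.metrics.DiceScore may take different or additional arguments, and the output layout assumed in output_transform is hypothetical.

# Minimal usage sketch -- inferred from the docstring above only;
# the real DiceScore constructor may differ.
import torch
from FastSurferCNN.utils.metrics import DiceScore

# Pick a device as suggested by the docstring's device note.
device = "cuda" if torch.cuda.is_available() else "cpu"

dice = DiceScore(
    # `op` updates the accumulator with each new output, e.g. a running sum.
    op=lambda accumulator, output: accumulator + output,
    # `output_transform` maps the engine's process_function output into the
    # form the metric expects; the dict keys here are hypothetical.
    output_transform=lambda output: (output["pred"], output["label"]),
    device=device,
)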
@@ -154,8 +157,8 @@ def _check_output_type(self, output):
         """
         Check the type of the output and raise an error if it doesn't match expectations.
 
-        Parameters:
-        -----------
+        Parameters
+        ----------
         output : tuple
             The output to be checked, expected to be a tuple.
         """
@@ -168,8 +171,8 @@ def _update_union_intersection(self, batch_output, labels_batch):
         """
         Update the union and intersection matrices based on batch predictions and labels.
 
-        Parameters:
-        -----------
+        Parameters
+        ----------
         batch_output : torch.Tensor
             Batch predictions from the model.
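The last hunk touches _update_union_intersection, which accumulates the per-class components of the Dice coefficient (Dice = 2·|A ∩ B| / (|A| + |B|)). The snippet below is a generic sketch of that accumulation pattern, not the method's actual body; the helper name, tensor shapes, and the clamp used to avoid division by zero are illustrative assumptions.

# Generic sketch of accumulating Dice components per class -- illustrative only,
# not the actual implementation of DiceScore._update_union_intersection.
import torch

def accumulate_dice_components(union, intersection, batch_output, labels_batch, num_classes):
    # batch_output and labels_batch are integer class maps of identical shape.
    for c in range(num_classes):
        pred_c = batch_output == c
        label_c = labels_batch == c
        intersection[c] += (pred_c & label_c).sum()  # |A ∩ B| for class c
        union[c] += pred_c.sum() + label_c.sum()     # |A| + |B| for class c
    return union, intersection

num_classes = 4
union = torch.zeros(num_classes)
intersection = torch.zeros(num_classes)
preds = torch.randint(0, num_classes, (2, 16, 16))   # fake batch of predictions
labels = torch.randint(0, num_classes, (2, 16, 16))  # fake batch of labels
union, intersection = accumulate_dice_components(union, intersection, preds, labels, num_classes)
dice_per_class = 2 * intersection / union.clamp(min=1)  # Dice = 2|A∩B| / (|A|+|B|)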

0 commit comments
