Currently, a corpus like the one below causes problems, since the metrics do not expect an empty prediction list:
```python
import jury

p = [["a b c"], []]
r = [["a b d e f"], ["a g h i"]]

scorer = jury.Jury()
scores = scorer(predictions=p, references=r)
```
The code above throws an exception when it encounters the empty list:
```
Traceback (most recent call last):
  File "/home/devrimcavusoglu/lab/gh/jury/jury/core.py", line 202, in <module>
    scores = scorer(predictions=p, references=r)
  File "/home/devrimcavusoglu/lab/gh/jury/jury/core.py", line 79, in __call__
    score = self._compute_single_score(inputs)
  File "/home/devrimcavusoglu/lab/gh/jury/jury/core.py", line 148, in _compute_single_score
    score = metric.compute(predictions=predictions, references=references, reduce_fn=reduce_fn)
  File "/home/devrimcavusoglu/lab/gh/jury/venv/lib/python3.8/site-packages/datasets/metric.py", line 402, in compute
    output = self._compute(predictions=predictions, references=references, **kwargs)
  File "/home/devrimcavusoglu/lab/gh/jury/jury/metrics/_core/base.py", line 325, in _compute
    result = self.evaluate(predictions=predictions, references=references, reduce_fn=reduce_fn, **eval_params)
  File "/home/devrimcavusoglu/lab/gh/jury/jury/metrics/bleu/bleu_for_language_generation.py", line 262, in evaluate
    return super().evaluate(predictions=predictions, references=references, reduce_fn=reduce_fn, **kwargs)
  File "/home/devrimcavusoglu/lab/gh/jury/jury/metrics/_core/base.py", line 279, in evaluate
    return eval_fn(predictions=predictions, references=references, **kwargs)
  File "/home/devrimcavusoglu/lab/gh/jury/jury/metrics/bleu/bleu_for_language_generation.py", line 216, in _compute_multi_pred_multi_ref
    adjusted_prediction_length += get_token_lengths(preds, reduce_fn=max)
  File "/home/devrimcavusoglu/lab/gh/jury/jury/metrics/_core/utils.py", line 58, in get_token_lengths
    return int(reduce_fn(token_lengths))
ValueError: max() arg is an empty sequence

Process finished with exit code 1
```
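One possible direction, sketched below, is to make the token-length computation tolerate empty inputs so that `reduce_fn` (here `max`) is never called on an empty sequence. The `get_token_lengths` body shown is only a hypothetical reconstruction from the two lines visible in the traceback; the real implementation in `jury/metrics/_core/utils.py` may differ.

```python
from typing import Callable, List


def get_token_lengths(sequences: List[str], reduce_fn: Callable = max) -> int:
    # Hypothetical reconstruction: tokenize by whitespace and reduce the lengths.
    token_lengths = [len(seq.split()) for seq in sequences]
    # Guard against an empty prediction/reference list so that
    # max()/min() is never applied to an empty sequence.
    if not token_lengths:
        return 0
    return int(reduce_fn(token_lengths))
```

Whether an empty prediction should contribute a length of 0, be skipped entirely, or raise a clearer error message is a design decision left to the maintainers.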