autointent.metrics.scoring_neg_coverage

autointent.metrics.scoring_neg_coverage(labels, scores)

Supports multilabel classification.

Evaluates how far we need to go, on average, down the list of classes ranked by score in order to cover all the proper labels of the instance.

  • The ideal value is 1

  • The worst value is 0

The result is equivalent to executing the following code:

>>> import numpy as np
>>> from scipy.stats import rankdata
>>> def compute_rank_metric():
...     scores = np.array([[1, 2, 3]])
...     labels = np.array([[1, 0, 0]])
...     n_classes = scores.shape[1]
...     # rank the scores within each utterance, ascending (1 = lowest)
...     int_ranks = rankdata(scores, axis=1)
...     # zero out the ranks of classes that are not true labels
...     filtered_ranks = int_ranks * labels
...     # take the highest remaining rank per utterance
...     max_ranks = np.max(filtered_ranks, axis=1)
...     # rescale from [1, n_classes] to [0, 1]
...     float_ranks = (max_ranks - 1) / (n_classes - 1)
...     return float(1 - np.mean(float_ranks))
>>> print(f"{compute_rank_metric():.1f}")
1.0
Parameters:
  • labels (autointent.metrics.custom_types.LABELS_VALUE_TYPE) – ground truth labels for each utterance

  • scores (autointent.metrics.custom_types.SCORES_VALUE_TYPE) – for each utterance, a list of n_classes scores, one per class

Returns:

Value of the metric; 1 is ideal, 0 is the worst

Return type:

float
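
For orientation, here is a minimal usage sketch with made-up data. It assumes that plain nested lists are accepted as LABELS_VALUE_TYPE and SCORES_VALUE_TYPE inputs; the expected output is worked out by hand from the equivalent code above rather than taken from a library run:

>>> from autointent.metrics import scoring_neg_coverage
>>> labels = [[1, 0, 0], [0, 1, 1]]
>>> scores = [[0.3, 0.2, 0.5], [0.1, 0.8, 0.7]]
>>> # ascending ranks are [[2, 1, 3], [1, 3, 2]], so the max ranks over
>>> # the true labels are [2, 3] and float_ranks = [0.5, 1.0]
>>> round(scoring_neg_coverage(labels, scores), 2)  # 1 - mean([0.5, 1.0])
0.25

The per-utterance values are averaged, so the result reflects the mean normalized rank across the whole batch.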