autointent.metrics.scoring_roc_auc

autointent.metrics.scoring_roc_auc(labels, scores)

Calculate ROC AUC score for multiclass and multilabel cases.

Macro-averaged ROC AUC for the utterance classification task:

\[\frac{1}{C}\sum_{k=1}^{C} \mathrm{ROCAUC}\bigl(\text{scores}[:, k],\ \text{labels}[:, k]\bigr)\]

where C is the number of classes.
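The averaging can be illustrated with a minimal sketch for the multilabel case, where labels is a binary indicator matrix; in the multiclass case the labels would first be one-hot encoded. This is not the library's implementation, only an equivalent computation assuming scikit-learn's roc_auc_score as the per-class ROCAUC:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def macro_roc_auc(labels: np.ndarray, scores: np.ndarray) -> float:
        """Average the one-vs-rest ROC AUC over all C classes."""
        n_classes = scores.shape[1]
        per_class = [
            # ROCAUC over column k: ground-truth indicators vs. predicted scores
            roc_auc_score(labels[:, k], scores[:, k])
            for k in range(n_classes)
        ]
        return float(np.mean(per_class))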

Parameters:
  • labels (autointent.metrics.custom_types.LABELS_VALUE_TYPE) – Ground truth labels for each utterance.

  • scores (autointent.metrics.custom_types.SCORES_VALUE_TYPE) – Predicted scores for each of the n_classes classes, for each utterance.

Returns:

Macro-averaged ROC AUC score.

Return type:

float
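
A usage sketch: the array values below are illustrative, and the accepted input types are defined by LABELS_VALUE_TYPE and SCORES_VALUE_TYPE.

    import numpy as np
    from autointent.metrics import scoring_roc_auc

    # Three utterances, two classes (multilabel ground truth as a binary matrix).
    labels = np.array([[1, 0], [0, 1], [1, 1]])
    # Predicted scores for each class, per utterance.
    scores = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.6]])

    value = scoring_roc_auc(labels, scores)
    print(value)  # macro-averaged ROC AUC, a float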