autointent.metrics#

All metrics for the regexp, retrieval, scoring, and decision nodes.

Submodules#

Attributes#

METRIC_FN

Classes#

DecisionMetricFn

Protocol for decision metrics.

RegexpMetricFn

Protocol for regexp metrics.

RetrievalMetricFn

Protocol for retrieval metrics.

ScoringMetricFn

Protocol for scoring metrics.
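
The four protocols share one idea: a metric is a plain callable that takes ground truth plus predictions (or scores, or retrieved candidate labels) and returns a single float, so a custom metric can be dropped in wherever a built-in one is accepted. Below is a minimal sketch of a custom scoring metric, assuming ScoringMetricFn's call signature is (labels, scores) -> float with labels given as int class ids and scores as an (n_samples, n_classes) array; check the protocol definition for the exact types.

import numpy as np

def scoring_top2_accuracy(labels, scores) -> float:
    """Hypothetical custom metric: share of samples whose true class is
    among the two highest-scoring classes (multiclass only)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    top2 = np.argsort(scores, axis=1)[:, -2:]       # indices of the two best-scored classes
    hits = (top2 == labels[:, None]).any(axis=1)    # is the true class one of them?
    return float(hits.mean())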

Functions#

decision_accuracy(y_true, y_pred)

Calculate decision accuracy. Supports both multiclass and multilabel.

decision_f1(y_true, y_pred)

Calculate decision F1 score. Supports both multiclass and multilabel.

decision_precision(y_true, y_pred)

Calculate decision precision. Supports both multiclass and multilabel.

decision_recall(y_true, y_pred)

Calculate decision recall. Supports both multiclass and multilabel.

decision_roc_auc(y_true, y_pred)

Calculate ROC AUC for multiclass and multilabel classification.

regexp_partial_accuracy(y_true, y_pred)

Calculate regexp partial accuracy.

regexp_partial_precision(y_true, y_pred)

Calculate regexp partial precision.

retrieval_hit_rate(query_labels, candidates_labels[, k])

Calculate the hit rate at position k.

retrieval_hit_rate_intersecting(query_labels, ...[, k])

Calculate the hit rate at position k for the intersecting labels.

retrieval_hit_rate_macro(query_labels, candidates_labels)

Calculate the macro-averaged hit rate at position k.

retrieval_map(query_labels, candidates_labels[, k])

Calculate the mean average precision at position k.

retrieval_map_intersecting(query_labels, candidates_labels)

Calculate the mean average precision at position k for the intersecting labels.

retrieval_map_macro(query_labels, candidates_labels[, k])

Calculate the macro-averaged mean average precision at position k.

retrieval_mrr(query_labels, candidates_labels[, k])

Calculate the Mean Reciprocal Rank (MRR) at position k.

retrieval_mrr_intersecting(query_labels, candidates_labels)

Calculate the Mean Reciprocal Rank (MRR) at position k for the intersecting labels.

retrieval_mrr_macro(query_labels, candidates_labels[, k])

Calculate the macro-averaged Mean Reciprocal Rank (MRR) at position k.

retrieval_ndcg(query_labels, candidates_labels[, k])

Calculate the Normalized Discounted Cumulative Gain (NDCG) at position k.

retrieval_ndcg_intersecting(query_labels, ...[, k])

Calculate the Normalized Discounted Cumulative Gain (NDCG) at position k for the intersecting labels.

retrieval_ndcg_macro(query_labels, candidates_labels)

Calculate the macro-averaged Normalized Discounted Cumulative Gain (NDCG) at position k.

retrieval_precision(query_labels, candidates_labels[, k])

Calculate the precision at position k.

retrieval_precision_intersecting(query_labels, ...[, k])

Calculate the precision at position k for the intersecting labels.

retrieval_precision_macro(query_labels, candidates_labels)

Calculate the macro-averaged precision at position k.

scoring_accuracy(labels, scores)

Calculate accuracy for multiclass and multilabel classification.

scoring_f1(labels, scores)

Calculate the F1 score for multiclass and multilabel classification.

scoring_hit_rate(labels, scores)

Calculate the hit rate for multilabel classification.

scoring_log_likelihood(labels, scores[, eps])

Calculate the log-likelihood score. Supports multiclass and multilabel cases.

scoring_map(labels, scores)

Calculate the mean average precision (MAP) score for multilabel classification.

scoring_neg_coverage(labels, scores)

Calculate the negative coverage score. Supports multilabel classification.

scoring_neg_ranking_loss(labels, scores)

Calculate the negative label ranking loss. Supports multilabel classification.

scoring_precision(labels, scores)

Calculate precision for multiclass and multilabel classification.

scoring_recall(labels, scores)

Calculate recall for multiclass and multilabel classification.

scoring_roc_auc(labels, scores)

Calculate the ROC AUC score. Supports multiclass and multilabel cases.
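
A short usage sketch covering one function from each family. The names and call signatures follow the listing above, but the input formats shown here (int class ids for decision and scoring metrics, an (n_samples, n_classes) score matrix, and one list of candidate labels per query for retrieval metrics) are assumptions, so check each function's full documentation before relying on them.

import numpy as np

from autointent.metrics import decision_accuracy, retrieval_hit_rate, scoring_accuracy

# Decision metrics compare the final predicted labels with the ground truth.
y_true = [0, 1, 2, 1]
y_pred = [0, 1, 1, 1]
print(decision_accuracy(y_true, y_pred))

# Scoring metrics compare ground-truth labels with per-class scores.
labels = [0, 1, 2]
scores = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
])
print(scoring_accuracy(labels, scores))

# Retrieval metrics compare each query's label with the labels of its
# retrieved candidates; k truncates the ranking to the top-k positions.
query_labels = [0, 1]
candidates_labels = [
    [0, 2, 1],  # labels of the candidates retrieved for the first query
    [2, 1, 0],  # labels of the candidates retrieved for the second query
]
print(retrieval_hit_rate(query_labels, candidates_labels, k=2))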

Package Contents#

autointent.metrics.METRIC_FN#