autointent.metrics.retrieval.retrieval_map_intersecting#
- autointent.metrics.retrieval.retrieval_map_intersecting(query_labels, candidates_labels, k=None)#
Calculate the mean average precision at position k for the intersecting labels.
The Mean Average Precision (MAP) for intersecting labels is the mean of the average precision (AP) scores over all queries. The average precision for a single query is computed over its top-k retrieved items, where a retrieved item counts as relevant if its labels intersect the query's true labels.
MAP is given by:
\[\text{MAP} = \frac{1}{Q} \sum_{q=1}^{Q} \text{AP}_{\text{intersecting}}(q, c, k)\]

where:

- \(Q\) is the total number of queries,
- \(\text{AP}_{\text{intersecting}}(q, c, k)\) is the average precision for the \(q\)-th query, computed from the intersecting true labels (q), the predicted labels (c), and the number of top items (k) to consider.
- Parameters:
query_labels (autointent.metrics.custom_types.LABELS_VALUE_TYPE) – For each query, this list contains its class labels
candidates_labels (autointent.metrics.custom_types.CANDIDATE_TYPE) – For each query, these lists contain class labels of items ranked by a retrieval model (from most to least relevant)
k (int | None) – Number of top items to consider for each query
- Returns:
Score of the retrieval metric
- Return type:
float
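The computation above can be sketched in plain Python. This is a minimal illustration, not the library's exact implementation: the intersection-based relevance rule (a candidate is relevant when its label set overlaps the query's) and the normalization of AP by the number of hits within the top-k are assumptions made for the sketch.

```python
def _average_precision_intersecting(query_label, candidate_labels, k=None):
    """AP for one query: a candidate is relevant if its labels
    intersect the query's labels (assumed relevance rule)."""
    query_set = set(query_label)
    top = candidate_labels[:k]  # k=None keeps the full ranking
    hits = 0
    precision_sum = 0.0
    for rank, labels in enumerate(top, start=1):
        if query_set & set(labels):
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    # normalize by the number of hits in the top-k (assumption)
    return precision_sum / hits if hits else 0.0


def retrieval_map_intersecting(query_labels, candidates_labels, k=None):
    """MAP: mean of the per-query AP scores."""
    aps = [
        _average_precision_intersecting(q, c, k)
        for q, c in zip(query_labels, candidates_labels)
    ]
    return sum(aps) / len(aps)
```

For example, with one query labeled `[0, 1]` and candidates ranked `[[1], [2], [0, 2]]`, the first and third candidates intersect the query's labels, giving AP = (1/1 + 2/3) / 2 = 5/6, which is also the MAP for this single-query case.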