autointent.metrics.retrieval.retrieval_precision_intersecting

autointent.metrics.retrieval.retrieval_precision_intersecting(query_labels, candidates_labels, k=None)

Calculate the precision at position k, counting a retrieved candidate as relevant when its labels intersect those of the query.

Precision at position \(k\) for intersecting labels is calculated as:

\[\text{Precision@k}_{\text{intersecting}} = \frac{1}{N} \sum_{i=1}^N \frac{\sum_{j=1}^k \mathbb{1} \left( y_{\text{query},i} \cdot y_{\text{candidates},i,j} > 0 \right)}{k}\]

where:

- \(N\) is the total number of queries,
- \(y_{\text{query},i}\) is the multi-hot encoded label vector of the \(i\)-th query,
- \(y_{\text{candidates},i,j}\) is the multi-hot encoded label vector of the \(j\)-th candidate for the \(i\)-th query,
- \(k\) is the number of top candidates considered,
- \(\mathbb{1}(\text{condition})\) is the indicator function that equals 1 if the condition is true and 0 otherwise.
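
To make the formula concrete, here is a minimal NumPy sketch of the same computation (an illustration under assumed array shapes, not the library's implementation; the helper name is hypothetical):

    import numpy as np

    def precision_at_k_intersecting(query_labels, candidates_labels, k):
        """Precision@k with intersecting (multilabel) relevance; a hypothetical sketch."""
        query = np.asarray(query_labels)            # shape (N, C), multi-hot
        candidates = np.asarray(candidates_labels)  # shape (N, M, C), multi-hot, ranked best-first
        # Dot product of each query with each of its candidates: shape (N, M)
        overlaps = np.einsum("nc,nmc->nm", query, candidates)
        # A candidate is relevant if it shares at least one label with its query
        relevant = overlaps[:, :k] > 0
        # Precision@k per query, averaged over all N queries
        return float(relevant.sum(axis=1).mean() / k)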

Parameters:
  • query_labels (autointent.metrics.custom_types.LABELS_VALUE_TYPE) – For each query, this list contains its class labels

  • candidates_labels (autointent.metrics.custom_types.CANDIDATE_TYPE) – For each query, these lists contain class labels of items ranked by a retrieval model (from most to least relevant)

  • k (int | None) – Number of top-ranked candidates to consider for each query

Returns:

Score of the retrieval metric

Return type:

float
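
A hedged usage sketch (the multi-hot input encoding below follows the convention of the formula above; consult the library's type definitions for the authoritative format):

    from autointent.metrics.retrieval import retrieval_precision_intersecting

    # Two queries over three classes, labels encoded as multi-hot vectors
    query_labels = [
        [1, 0, 1],  # query 0 belongs to classes 0 and 2
        [0, 1, 0],  # query 1 belongs to class 1
    ]
    # For each query, candidate labels ranked from most to least relevant
    candidates_labels = [
        [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
        [[0, 1, 1], [1, 0, 0], [0, 1, 0]],
    ]

    # At k=2, each query has exactly one top-2 candidate sharing a label with it,
    # so the score is (1/2 + 1/2) / 2 = 0.5
    score = retrieval_precision_intersecting(query_labels, candidates_labels, k=2)
    print(score)  # 0.5 under the formula above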