autointent.modules.decision.TunableDecision#
- class autointent.modules.decision.TunableDecision(target_metric='decision_accuracy', n_optuna_trials=320, seed=0, tags=None)#
Bases:
autointent.modules.base.BaseDecision
Tunable predictor module.
TunableDecision uses an optimization process to find the best thresholds for predicting labels in single-label or multi-label classification tasks. It is designed for datasets with varying score distributions and supports out-of-scope (OOS) detection.
- Parameters:
target_metric (MetricType) – Metric to optimize during threshold tuning
n_optuna_trials (pydantic.PositiveInt) – Number of optimization trials
seed (int | None) – Random seed for reproducibility
tags (list[autointent.schemas.Tag] | None) – Tags for predictions (if any)
Examples:#
Single-label classification#
import numpy as np
from autointent.modules import TunableDecision

scores = np.array([[0.2, 0.8], [0.6, 0.4], [0.1, 0.9]])
labels = [1, 0, 1]
predictor = TunableDecision(n_optuna_trials=100, seed=42)
predictor.fit(scores, labels)
test_scores = np.array([[0.3, 0.7], [0.5, 0.5]])
predictions = predictor.predict(test_scores)
print(predictions)
[1, 0]
Multi-label classification#
labels = [[1, 0], [0, 1], [1, 1]]  # reuses `scores` from the single-label example above
predictor = TunableDecision(n_optuna_trials=100, seed=42)
predictor.fit(scores, labels)
test_scores = np.array([[0.3, 0.7], [0.6, 0.4]])
predictions = predictor.predict(test_scores)
print(predictions)
[[1, 0], [1, 0]]
- name = 'tunable'#
Name of the module.
- supports_multilabel = True#
Whether the module supports multilabel classification.
- supports_multiclass = True#
Whether the module supports multiclass classification.
- supports_oos = True#
Whether the module supports out-of-scope (OOS) data.
- tags: list[autointent.schemas.Tag] | None#
- target_metric = 'decision_accuracy'#
- n_optuna_trials = 320#
- seed = 0#
- classmethod from_context(context, target_metric='decision_accuracy', n_optuna_trials=320)#
Initialize from context.
- Parameters:
context (autointent.context.Context) – Context containing configurations and utilities
target_metric (MetricType) – Metric to optimize during threshold tuning
n_optuna_trials (pydantic.PositiveInt) – Number of optimization trials
- Return type:
TunableDecision
- fit(scores, labels, tags=None)#
Fit the predictor by optimizing thresholds.
Note: When data doesn’t contain out-of-scope utterances, using TunableDecision imposes unnecessary computational overhead.
- Parameters:
scores (numpy.typing.NDArray[Any]) – Array of shape (n_samples, n_classes) with predicted scores
labels (autointent.custom_types.ListOfGenericLabels) – List of true labels
tags (list[autointent.schemas.Tag] | None) – Tags for predictions (if any)
- Return type:
None
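As the note above suggests, TunableDecision is most useful when the data does contain out-of-scope utterances, which can be passed directly to fit. A minimal sketch, assuming OOS samples are marked with None in the labels list (the score values below are illustrative):
import numpy as np
from autointent.modules import TunableDecision

scores = np.array([[0.2, 0.8], [0.6, 0.4], [0.1, 0.9], [0.4, 0.45]])
labels = [1, 0, 1, None]  # assumption: None marks an out-of-scope sample
predictor = TunableDecision(n_optuna_trials=100, seed=42)
predictor.fit(scores, labels)
# a test sample whose scores stay below the tuned thresholds may be returned as None (OOS)
print(predictor.predict(np.array([[0.4, 0.45]])))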
- predict(scores)#
Predict labels using optimized thresholds.
- Parameters:
scores (numpy.typing.NDArray[Any]) – Array of shape (n_samples, n_classes) with predicted scores
- Returns:
Predicted labels (either single-label or multi-label)
- Raises:
MismatchNumClassesError – If number of classes in scores doesn’t match training data
- Return type:
autointent.custom_types.ListOfGenericLabels
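predict expects the same number of classes (columns) as the scores used during fit; otherwise it raises MismatchNumClassesError. A short sketch of the failure mode, with illustrative values:
import numpy as np
from autointent.modules import TunableDecision

scores = np.array([[0.2, 0.8], [0.6, 0.4], [0.1, 0.9]])  # two classes
predictor = TunableDecision(n_optuna_trials=100, seed=42)
predictor.fit(scores, [1, 0, 1])
predictor.predict(np.array([[0.2, 0.3, 0.5]]))  # three classes -> raises MismatchNumClassesError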