Inference Pipeline#

After you have configured an optimal pipeline with AutoIntent, you probably want to test it on some new data! There are two options:

  • use it right after optimization

  • save it to the file system and then load it

Right After#

Here’s the basic example:

[1]:
from autointent import Dataset, Pipeline

# two-node search space: a scoring node and a decision node, each optimized against its own metric
search_space = [
    {
        "node_type": "scoring",
        "target_metric": "scoring_roc_auc",
        "search_space": [
            {
                "module_name": "knn",
                "k": [1],
                "weights": ["uniform"],
                "embedder_config": ["avsolatorio/GIST-small-Embedding-v0"],
            },
        ],
    },
    {
        "node_type": "decision",
        "target_metric": "decision_accuracy",
        "search_space": [
            {"module_name": "threshold", "thresh": [0.5]},
            {"module_name": "argmax"},
        ],
    },
]

dataset = Dataset.from_hub("DeepPavlov/clinc150_subset")
pipeline = Pipeline.from_search_space(search_space)
context = pipeline.fit(dataset)
pipeline.predict(["hello, world!"])
[1]:
[3]
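The predict method returns one predicted label per utterance, so the single query above is mapped to label 3 of the dataset.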

The main caveat is RAM usage.

You can reduce it by dumping all fitted modules to the file system instead of keeping them in memory. Just set the following options:

[2]:
from autointent.configs import LoggingConfig

# dump_modules=True saves fitted modules to the file system,
# clear_ram=True frees the corresponding memory after dumping
logging_config = LoggingConfig(dump_modules=True, clear_ram=True)
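The config takes effect only if it is attached to the pipeline before fitting, via pipeline.set_config, as shown in the next section.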

Load from File System#

First, your auto-configuration run should dump its modules to the file system:

[3]:
from autointent import Dataset, Pipeline
from autointent.configs import LoggingConfig

dataset = Dataset.from_hub("DeepPavlov/clinc150_subset")
pipeline = Pipeline.from_search_space(search_space)
pipeline.set_config(LoggingConfig(dump_modules=True, clear_ram=True))
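
Note that this cell reuses the search_space defined in the first example above.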

Second, after the optimization has finished, save the auto-configuration results to the file system:

[4]:
context = pipeline.fit(dataset)
context.dump()
# or pipeline.dump() to save only the configured pipeline without the optimization assets

This command saves all results to the run’s directory:

[5]:
run_directory = context.logging_config.dirpath
run_directory
[5]:
PosixPath('/tmp/tmpu1fqrrs0/4c9742d631de3a7932374b15a891660ea1586afb/docs/source/user_guides/runs/magnificent_tyrannosaurus_08-01-2025_06-20-54')
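
The path above is just a temporary directory created for this documentation build; in your own runs the directory comes from the logging configuration (context.logging_config.dirpath), so make sure it lives somewhere persistent if you plan to reload the pipeline later.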

After that, you can load the pipeline for inference:

[6]:
loaded_pipeline = Pipeline.load(run_directory)
loaded_pipeline.predict(["hello, world!"])
[6]:
[3]
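
The loaded pipeline is used the same way as the one obtained right after optimization. For example, predict accepts a whole batch of utterances at once (a minimal sketch with arbitrary example queries):

[ ]:
# predict returns one label per utterance in the batch
loaded_pipeline.predict([
    "hello, world!",
    "can you book me a table for two",
])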

That’s all!#