LLM methods#

This module provides basic methods for evaluating LLM conditions. Each method returns a bool based on the LLM's result.

class BaseMethod(**data)[source]#

Bases: BaseModel, ABC

Base class for evaluating a model's response as a condition.

abstract async __call__(ctx, model_result)[source]#

Determine whether the result of an LLM invocation satisfies this method's condition.

Parameters:
  • ctx (Context) – Current dialog context.

  • model_result (LLMResult) – Result of the langchain model's invoke call.

Return type:

bool

model_result_to_text(model_result)[source]#

Extract text from raw model result.

Return type:

str
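
Subclasses override __call__ with the desired check. The sketch below illustrates that shape with a hypothetical EndsWith condition; FakeContext and FakeResult are illustrative stand-ins, not the library's Context and LLMResult types.

```python
import asyncio

class FakeContext:
    """Stand-in for the pipeline's Context (assumption for illustration)."""

class FakeResult:
    """Stand-in for a model result carrying the generated text."""
    def __init__(self, text: str):
        self.text = text

class EndsWith:
    """Hypothetical condition: True when the response ends with `suffix`."""
    def __init__(self, suffix: str):
        self.suffix = suffix

    def model_result_to_text(self, model_result) -> str:
        # Extract plain text from the raw model result.
        return model_result.text

    async def __call__(self, ctx, model_result) -> bool:
        # Determine whether the invocation result satisfies the condition.
        return self.model_result_to_text(model_result).endswith(self.suffix)

satisfied = asyncio.run(EndsWith("yes.")(FakeContext(), FakeResult("I think yes.")))
print(satisfied)  # True
```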

class Contains(**data)[source]#

Bases: BaseMethod

Simple method to check if a string contains a pattern.

pattern: str#

Pattern to search for in model_result.

async __call__(ctx, model_result)[source]#
Return type:

bool

Returns:

True if pattern is contained in model_result.
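
A standalone sketch of the containment check, assuming plain, case-sensitive substring membership (not a regular expression); contains_check is a hypothetical helper, not the library's API:

```python
def contains_check(pattern: str, model_text: str) -> bool:
    """True if `pattern` occurs anywhere in the model's text output."""
    return pattern in model_text

print(contains_check("apologize", "I apologize for the confusion."))  # True
print(contains_check("apologize", "Glad to help!"))                   # False
```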

class LogProb(**data)[source]#

Bases: BaseMethod

Method to check whether a target token’s log probability is higher than a threshold.

target_token: str#

Token to check for (e.g. “TRUE”).

threshold: float#

Threshold that the log probability must exceed. Defaults to -0.5.

async __call__(ctx, model_result)[source]#
Return type:

bool

Returns:

True if the log probability of the token is higher than the threshold.
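
A standalone sketch of the comparison, assuming the model result exposes per-token log probabilities as a mapping; logprob_check is a hypothetical helper, not the library's API. A threshold of -0.5 corresponds to a probability of roughly 0.61.

```python
def logprob_check(target_token: str, token_logprobs: dict, threshold: float = -0.5) -> bool:
    """True if the target token's log probability exceeds the threshold."""
    logprob = token_logprobs.get(target_token)
    # A missing token fails the condition outright.
    return logprob is not None and logprob > threshold

print(logprob_check("TRUE", {"TRUE": -0.2, "FALSE": -1.8}))  # -0.2 > -0.5 -> True
print(logprob_check("TRUE", {"TRUE": -0.9, "FALSE": -0.6}))  # -0.9 <= -0.5 -> False
```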