LLM API#

Wrapper around langchain.

class LLM_API(model, system_prompt='', position_config=None)[source]#

Bases: object

This class acts as a wrapper for all langchain LLMs and handles message exchange between the remote model and Chatsky classes.
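
A minimal construction sketch follows; the ChatOpenAI backend and the chatsky.llm import path are assumptions that may need adjusting for your setup:

from langchain_openai import ChatOpenAI

from chatsky.llm import LLM_API  # import path is an assumption

# Wrap a langchain chat model; the system prompt accompanies every request.
model = LLM_API(
    model=ChatOpenAI(model="gpt-4o-mini"),
    system_prompt="You are a helpful assistant for a pizza delivery service.",
)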

async respond(history, message_schema=None)[source]#

Process and structure the model’s response based on the provided schema.

Parameters:
  • history (list[BaseMessage]) – List of previous messages in the conversation

  • message_schema (Union[None, Type[Message], Type[BaseModel]]) – Schema for structuring the output, defaults to None

Return type:

Message

Returns:

Processed model response

Raises:

ValueError – If message_schema is not None, a subclass of Message, or a subclass of BaseModel
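
A hedged usage sketch is given below; the Order schema and the hand-built history are purely illustrative, the model instance comes from the construction example above, and in a real bot respond() is normally invoked for you by the LLM script functions:

from langchain_core.messages import HumanMessage, SystemMessage
from pydantic import BaseModel

class Order(BaseModel):  # hypothetical structured-output schema
    item: str
    quantity: int

history = [
    SystemMessage(content="Extract the order from the user's message."),
    HumanMessage(content="I'd like two margherita pizzas, please."),
]

# Both calls must be awaited from within a running event loop.
reply = await model.respond(history)                        # free-form reply as a Message
order = await model.respond(history, message_schema=Order)  # reply structured by the Order schema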

async condition(history, method)[source]#

Execute a conditional method on the conversation history.

Parameters:
  • history (list[BaseMessage]) – List of previous messages in the conversation

  • method (BaseMethod) – Method to evaluate the condition

Return type:

bool

Returns:

Boolean result of the condition evaluation
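
A hedged sketch, reusing the model and history objects from the examples above and assuming the Contains method lives in chatsky.llm.methods:

from chatsky.llm.methods import Contains  # import path is an assumption

# Evaluates to True if the model's answer contains the pattern (assumed semantics);
# must be awaited from within a running event loop.
refused = await model.condition(history, method=Contains(pattern="sorry"))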

class BaseLLMScriptFunction(**data)[source]#

Bases: BaseModel

Base class for script functions that use an LLM model.

llm_model_name: str#

Key of the model in the models dictionary.

prompt: Prompt#

Script function prompt.

history: int#

Number of dialogue turns, besides the current one, to keep in the history. Use -1 for the full history.

filter_func: BaseHistoryFilter#

Filter function to filter messages in history.

prompt_misc_filter: str#

Regular expression used to find prompts by key names in the MISC dictionary.

position_config: Optional[PositionConfig]#

Config for positions of prompts and messages in history.

max_size: int#

Maximum size of any message in the chat, in characters. If a message exceeds this limit, it will not be sent to the LLM and a warning will be produced.
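
A hedged configuration sketch follows; LLMResponse is assumed here to be a concrete subclass of this class, and its import path and the shown values are illustrative:

from chatsky.responses.llm import LLMResponse  # import path is an assumption

response = LLMResponse(
    llm_model_name="my_model",           # key of the model in the models dictionary
    prompt="Answer the user politely.",  # script function prompt
    history=5,                           # keep 5 previous turns; -1 for the full history
    max_size=5000,                       # messages longer than 5000 characters are not sent
)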

async _get_langchain_context(ctx)[source]#

Convert Context to langchain messages using get_langchain_context().

Arguments to the function are taken from attributes of this class and from the LLM_API model stored in the pipeline:

  1. The model is retrieved from the pipeline using llm_model_name;

  2. The model’s system_prompt is executed and passed to get_langchain_context() as system_prompt;

  3. If position_config is None, the model’s position_config is used instead;

  4. The rest of the arguments are passed as is (see the sketch below).
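
A rough sketch of these steps is shown below; it is not the actual implementation, and every attribute access and call form in it is an assumption:

model = ctx.pipeline.models[self.llm_model_name]                 # 1. look up the model by its key (attribute path is an assumption)
system_prompt = await model.system_prompt(ctx)                   # 2. execute the model's system prompt (call form is an assumption)
position_config = self.position_config or model.position_config  # 3. fall back to the model's position_config
# 4. These values, together with the remaining attributes of this class,
#    are then passed to get_langchain_context().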

Parameters:

ctx (Context) – Context object.

Return type:

list[BaseMessage]

Returns:

A list of LangChain messages.

_get_api(ctx)[source]#

Get LLM_API instance for the current model.

Parameters:

ctx (Context) – Context object

Return type:

LLM_API

Returns:

LLM_API instance