LLM responses#

Responses produced by calling an LLM_API.

class LLMResponse(**data)[source]#

Bases: BaseResponse

Basic response class for receiving LLM responses. Uses its prompt to produce a result from the model.

llm_model_name: str#

Key of the model in the models dictionary.

prompt: Prompt#

Response prompt.

history: int#

Number of dialogue turns, besides the current one, to keep in the history. Set to -1 to keep the full history.
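
To make the windowing rule concrete, the sketch below shows the semantics described above; select_history is a hypothetical helper used only for illustration, not part of the library:

    # Hypothetical helper illustrating the history-window semantics above;
    # it is not part of the library.
    def select_history(turns: list, history: int) -> list:
        """Return the last `history` turns, or all of them when history == -1."""
        if history == -1:
            return turns
        return turns[-history:] if history > 0 else []

    select_history(["t1", "t2", "t3", "t4"], 2)   # ["t3", "t4"]
    select_history(["t1", "t2", "t3", "t4"], -1)  # all four turns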

filter_func: BaseHistoryFilter#

Filter used to select which history messages are passed to the model.
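
The exact BaseHistoryFilter interface is not shown here; as a rough illustration only, a filter reduces to a per-message keep/drop decision, as in this stand-in:

    # Illustrative stand-in for a history filter; the real BaseHistoryFilter
    # interface may differ. The keep/drop predicate is the essential idea.
    class KeepShortMessages:
        """Keep only history messages below a character limit."""

        def __init__(self, limit: int = 500):
            self.limit = limit

        def __call__(self, message_text: str) -> bool:
            return len(message_text) <= self.limit

    history = ["hi", "x" * 1000, "how can I help?"]
    keep = KeepShortMessages(limit=500)
    filtered = [m for m in history if keep(m)]  # drops the 1000-char message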

prompt_misc_filter: str#

Regular expression used to find prompts by key name in the MISC dictionary.
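
For example, assuming MISC is a plain dictionary, a pattern such as r"prompt_.*" would select prompts by key name like this (the key names here are illustrative assumptions):

    import re

    # Illustrative MISC dictionary; key names are assumptions.
    misc = {
        "prompt_main": "Greet the user.",
        "prompt_fallback": "Apologize.",
        "notes": "internal",
    }
    pattern = re.compile(r"prompt_.*")
    prompts = {k: v for k, v in misc.items() if pattern.fullmatch(k)}
    # -> {"prompt_main": ..., "prompt_fallback": ...}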

position_config: Optional[PositionConfig]#

Config for positions of prompts and messages in history.

message_schema: Union[None, Type[Message], Type[BaseModel]]#

Schema for model output validation.
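
Since any pydantic BaseModel subclass is accepted, a validation schema can be as small as the following sketch (the field names are illustrative, not prescribed by the library):

    from pydantic import BaseModel

    # Illustrative schema for validating structured model output.
    class MovieReview(BaseModel):
        title: str
        rating: int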

max_size: int#

Maximum length, in characters, of any message in the chat. If a message exceeds this limit, it is not sent to the LLM and a warning is logged.
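
The described behavior amounts to a length check like this sketch (the helper and logger are illustrative, not the library's actual implementation):

    import logging

    logger = logging.getLogger(__name__)

    # Sketch of the max_size rule described above; not the actual implementation.
    def fits(message_text: str, max_size: int) -> bool:
        if len(message_text) > max_size:
            logger.warning(
                "Message of %d characters exceeds max_size=%d; not sent to the LLM.",
                len(message_text), max_size,
            )
            return False
        return True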

async call(ctx)[source]#

Produce the response by sending the prompt and the filtered dialogue history to the LLM model.

Return type:

Message
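
A hypothetical end-to-end usage sketch follows; the import path and the string-to-Prompt coercion are assumptions rather than the library's confirmed API:

    # Assumed import path; adjust to the actual package layout.
    from chatsky.responses.llm import LLMResponse

    response = LLMResponse(
        llm_model_name="barista_model",         # key in the models dictionary
        prompt="You are a friendly barista.",   # assuming plain strings coerce to Prompt
        history=5,                              # keep the last five turns besides the current one
        max_size=1000,                          # per-message character limit
    )
    # Inside a pipeline, `await response.call(ctx)` returns a Message, as documented above.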