LLM responses#
Responses based on LLM API calling.
- class LLMResponse(**data)[source]#
  Bases: BaseResponse
  Basic class for receiving LLM responses. Uses the prompt to produce a result from the model.
  - history: int
    # Number of dialogue turns aside from the current one to keep in history. -1 for full history.
  - filter_func: BaseHistoryFilter
    # Filter function to filter messages in history.
  - prompt_misc_filter: str
    # Regular expression to find prompts by key names in the MISC dictionary.
  - position_config: Optional[PositionConfig]
    # Config for positions of prompts and messages in history.
  - max_size: int
    # Maximum size of any message in chat, in symbols. If a message exceeds the limit, it will not be sent to the LLM and a warning will be produced.
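To make the interplay of `history` and `max_size` concrete, here is a minimal sketch of the filtering behavior described above. The function name `filter_history` and its exact signature are illustrative assumptions, not part of the library's API; the real class also applies `filter_func` and `position_config`, which are omitted here.

```python
import warnings

def filter_history(messages: list[str], history: int = -1, max_size: int = 1000) -> list[str]:
    """Illustrative sketch (not library code): keep the last `history`
    turns (-1 for full history) and drop any message whose length in
    symbols exceeds `max_size`, warning about each dropped message."""
    kept = messages if history == -1 else messages[-history:]
    result = []
    for msg in kept:
        if len(msg) > max_size:
            # Oversized messages are not sent to the LLM; a warning is produced.
            warnings.warn(f"Message of {len(msg)} symbols exceeds max_size={max_size}; skipping.")
            continue
        result.append(msg)
    return result
```

For example, with `history=2` only the two most recent turns are considered, and any of them longer than `max_size` symbols is dropped with a warning before the context is sent to the model.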