get_response

scikitplot.llm_provider.get_response(messages='', model_provider='huggingface', model_id=None, api_key=None, max_history=10, temperature=0.5, max_tokens=256, frequency_penalty=0.5, presence_penalty=0.5, stream=False, **kwargs)

Get a response from an LLM provider.
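A minimal usage sketch (assuming the relevant provider token, e.g. HUGGINGFACE_TOKEN, is available in the environment or a .env file; the messages shown are illustrative):

>>> from scikitplot.llm_provider import get_response
>>> messages = [
...     {"role": "system", "content": "You are a helpful assistant."},
...     {"role": "user", "content": "Summarize what a confusion matrix shows."},
... ]
>>> reply = get_response(
...     messages=messages,
...     model_provider="huggingface",  # model_id falls back to the provider's first listed model
...     temperature=0.5,
...     max_tokens=256,
... )
>>> print(reply)  # assistant's reply, or an error message / None on failure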

Parameters:
messages : str | List[Dict[str, str]]

A single message string, or a list of chat messages each with a role and content.

model_provider : str

Model provider name (e.g. ‘openai’, ‘groq’, ‘huggingface’).

model_id : str, optional

Model identifier for the provider. Defaults to the first model_id listed for the selected model_provider.

api_key : str, optional

API key or token used to authenticate with the selected model_provider/model_id. If not provided, it is loaded from .env based on the model_provider (e.g. “HUGGINGFACE_TOKEN”).

max_history : int

Number of recent messages to retain.

temperature : float

Sampling temperature.

max_tokens : int

Maximum tokens to generate.

frequency_penalty : float

Penalize frequent tokens (OpenAI/Groq only).

presence_penalty : float

Encourage/discourage topic diversity (OpenAI/Groq only).

stream : bool

Whether to stream the response in Streamlit (see the streaming sketch at the end of this entry).

**kwargs : dict, optional

Additional keyword arguments forwarded to the client constructor.

base_url : str, optional

Base URL for a model_provider exposing an OpenAI-compatible API, e.g. OpenAI(api_key=”<DeepSeek API Key>”, base_url=”https://api.deepseek.com”). A sketch follows this parameter list.
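A sketch of the base_url passthrough, assuming an OpenAI-compatible endpoint; the DeepSeek endpoint and the deepseek-chat model id are illustrative:

>>> from scikitplot.llm_provider import get_response
>>> reply = get_response(
...     messages=[{"role": "user", "content": "Hello!"}],
...     model_provider="openai",              # client built in OpenAI-compatible mode
...     model_id="deepseek-chat",             # illustrative model id
...     api_key="<DeepSeek API Key>",
...     base_url="https://api.deepseek.com",  # forwarded to the client constructor
... )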

Returns:
str or None

Assistant’s reply or error message.

Return type:

str | None
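For stream=True, a sketch of how the call might sit inside a Streamlit app (assumes streamlit is installed and the script is launched with streamlit run; whether the full reply is also returned when streaming is provider-dependent):

>>> import streamlit as st
>>> from scikitplot.llm_provider import get_response
>>> prompt = st.chat_input("Ask something")
>>> if prompt:
...     with st.chat_message("assistant"):
...         get_response(
...             messages=[{"role": "user", "content": prompt}],
...             model_provider="huggingface",
...             stream=True,  # render the reply incrementally in the Streamlit app
...         )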