Providers

Providers are abstractions for different LLM APIs that power your proactive agents.

BaseProvider

proactiveagent.providers.base.BaseProvider

```python
BaseProvider(model: str, **kwargs: Any)
```

Bases: ABC

Abstract base class for AI providers

Initialize provider

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `str` | Model name/identifier | *required* |
| `**kwargs` | `Any` | Provider-specific configuration | `{}` |
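`BaseProvider` is abstract, so it cannot be instantiated directly: a concrete provider must override all three abstract methods documented below. As a sketch, here is the documented interface reproduced locally (a stand-in rather than an import from `proactiveagent`; the attribute names `self.model` and `self.config` are illustrative, not confirmed by the source):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional


class BaseProvider(ABC):
    """Local stand-in mirroring the documented interface."""

    def __init__(self, model: str, **kwargs: Any):
        self.model = model    # model name/identifier
        self.config = kwargs  # provider-specific configuration (illustrative attribute name)

    @abstractmethod
    async def generate_response(
        self,
        messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
        triggered_by_user_message: bool = False,
        **kwargs: Any,
    ) -> str: ...

    @abstractmethod
    async def should_respond(
        self,
        messages: List[Dict[str, str]],
        elapsed_time: int,
        context: Dict[str, Any],
    ) -> bool: ...

    @abstractmethod
    async def calculate_sleep_time(
        self,
        wake_up_pattern: str,
        min_sleep_time: int,
        max_sleep_time: int,
        context: Dict[str, Any],
    ) -> tuple[int, str]: ...


# Direct instantiation fails until all three methods are overridden.
try:
    BaseProvider(model="demo")
except TypeError as exc:
    print(f"TypeError: {exc}")
```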

generate_response abstractmethod async

```python
generate_response(messages: List[Dict[str, str]], system_prompt: Optional[str] = None, triggered_by_user_message: bool = False, **kwargs: Any) -> str
```

Generate a response from the AI

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `messages` | `List[Dict[str, str]]` | List of message dictionaries with `'role'` and `'content'` | *required* |
| `system_prompt` | `Optional[str]` | Optional system prompt | `None` |
| `triggered_by_user_message` | `bool` | Whether the response was triggered by a user message | `False` |
| `**kwargs` | `Any` | Additional generation parameters | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `str` | Generated response text |
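To make the contract concrete, here is one possible implementation with a toy echo backend instead of a real LLM call (purely illustrative; a real provider would call its model API here):

```python
import asyncio
from typing import Any, Dict, List, Optional


async def generate_response(messages: List[Dict[str, str]],
                            system_prompt: Optional[str] = None,
                            triggered_by_user_message: bool = False,
                            **kwargs: Any) -> str:
    # Toy backend: echo the last user message instead of calling a real LLM.
    last_user = next((m["content"] for m in reversed(messages)
                      if m["role"] == "user"), "")
    # Mark unprompted (proactive) messages so the difference is visible.
    prefix = "" if triggered_by_user_message else "[proactive] "
    return f"{prefix}You said: {last_user}"


reply = asyncio.run(generate_response(
    [{"role": "user", "content": "hello"}],
    system_prompt="Be brief",
    triggered_by_user_message=True,
))
print(reply)  # → You said: hello
```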

should_respond abstractmethod async

```python
should_respond(messages: List[Dict[str, str]], elapsed_time: int, context: Dict[str, Any]) -> bool
```

Determine if the AI should respond based on context

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `messages` | `List[Dict[str, str]]` | Conversation history | *required* |
| `elapsed_time` | `int` | Time since last user message (seconds) | *required* |
| `context` | `Dict[str, Any]` | Additional context information | *required* |

Returns:

| Type | Description |
| --- | --- |
| `bool` | `True` if AI should respond, `False` otherwise |
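A custom override can be as simple as a time-based heuristic. The sketch below is one such implementation (the `silence_threshold` context key is hypothetical, invented for this example, not part of the library's context contract):

```python
import asyncio
from typing import Any, Dict, List


async def should_respond(messages: List[Dict[str, str]],
                         elapsed_time: int,
                         context: Dict[str, Any]) -> bool:
    # Respond only once the user has been silent past a threshold.
    # 'silence_threshold' is a hypothetical context key used for illustration.
    threshold = context.get("silence_threshold", 300)
    return elapsed_time >= threshold


print(asyncio.run(should_respond([], elapsed_time=600, context={})))  # → True
print(asyncio.run(should_respond([], elapsed_time=60, context={})))   # → False
```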

calculate_sleep_time abstractmethod async

```python
calculate_sleep_time(wake_up_pattern: str, min_sleep_time: int, max_sleep_time: int, context: Dict[str, Any]) -> tuple[int, str]
```

Calculate how long to sleep before next wake-up

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `wake_up_pattern` | `str` | User-defined wake-up pattern description | *required* |
| `min_sleep_time` | `int` | Minimum allowed sleep time (seconds) | *required* |
| `max_sleep_time` | `int` | Maximum allowed sleep time (seconds) | *required* |
| `context` | `Dict[str, Any]` | Current conversation context | *required* |

Returns:

| Type | Description |
| --- | --- |
| `tuple[int, str]` | Tuple of `(sleep_time_seconds: int, reasoning: str)` |
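As an illustration of the return contract, here is a toy override that maps the pattern string to a duration and clamps it into `[min_sleep_time, max_sleep_time]` (the keyword matching is invented for this example; the OpenAI provider below interprets the pattern with the model instead):

```python
import asyncio
from typing import Any, Dict


async def calculate_sleep_time(wake_up_pattern: str,
                               min_sleep_time: int,
                               max_sleep_time: int,
                               context: Dict[str, Any]) -> tuple[int, str]:
    # Toy interpretation: 'frequent' patterns sleep near the minimum,
    # everything else near the maximum; always clamp into the allowed range.
    wanted = min_sleep_time if "frequent" in wake_up_pattern else max_sleep_time
    sleep_time = max(min_sleep_time, min(wanted, max_sleep_time))
    reasoning = f"pattern {wake_up_pattern!r} -> sleep {sleep_time}s"
    return sleep_time, reasoning


sleep_time, why = asyncio.run(calculate_sleep_time(
    "check in frequently", min_sleep_time=30, max_sleep_time=600, context={}))
print(sleep_time)  # → 30
```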

OpenAIProvider

proactiveagent.providers.openai_provider.OpenAIProvider

```python
OpenAIProvider(model: str = 'gpt-3.5-turbo', **kwargs: Any)
```

Bases: BaseProvider

OpenAI provider for proactive AI agents

Initialize OpenAI provider

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `str` | OpenAI model name | `'gpt-3.5-turbo'` |
| `**kwargs` | `Any` | Additional OpenAI parameters (including `api_key`) | `{}` |

generate_response async

```python
generate_response(messages: List[Dict[str, str]], system_prompt: Optional[str] = None, triggered_by_user_message: bool = False, **kwargs: Any) -> str
```

Generate response using OpenAI API

should_respond async

```python
should_respond(messages: List[Dict[str, str]], elapsed_time: int, context: Dict[str, Any], **kwargs: Any) -> bool
```

Determine if AI should respond using OpenAI decision-making with structured output

calculate_sleep_time async

```python
calculate_sleep_time(wake_up_pattern: str, min_sleep_time: int, max_sleep_time: int, context: Dict[str, Any], **kwargs: Any) -> tuple[int, str]
```

Calculate sleep time using OpenAI pattern interpretation with structured output
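Taken together, the three methods drive a wake/decide/respond cycle. The sketch below shows one plausible scheduling step with a toy provider; it illustrates how the pieces fit, and is not `proactiveagent`'s actual scheduler:

```python
import asyncio
from typing import Any, Dict, List, Optional


class TickProvider:
    """Toy provider implementing the three documented methods without an LLM."""

    async def should_respond(self, messages: List[Dict[str, str]],
                             elapsed_time: int, context: Dict[str, Any]) -> bool:
        return elapsed_time >= context.get("silence_threshold", 120)

    async def calculate_sleep_time(self, wake_up_pattern: str,
                                   min_sleep_time: int, max_sleep_time: int,
                                   context: Dict[str, Any]) -> tuple[int, str]:
        return min_sleep_time, f"toy schedule for {wake_up_pattern!r}"

    async def generate_response(self, messages: List[Dict[str, str]],
                                system_prompt: Optional[str] = None,
                                triggered_by_user_message: bool = False,
                                **kwargs: Any) -> str:
        return "Just checking in!"


async def one_wake_cycle(provider: TickProvider,
                         messages: List[Dict[str, str]],
                         elapsed_time: int) -> Optional[str]:
    # Wake up, decide whether to speak, and generate a proactive message if so.
    context: Dict[str, Any] = {}
    if await provider.should_respond(messages, elapsed_time, context):
        return await provider.generate_response(messages)
    sleep_time, reason = await provider.calculate_sleep_time(
        "every few minutes", 60, 600, context)
    return None  # a real scheduler would sleep `sleep_time` seconds here


print(asyncio.run(one_wake_cycle(TickProvider(), [], elapsed_time=300)))  # → Just checking in!
```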