This guide will walk you through creating your first proactive AI agent. By the end, you'll understand how to build agents that can initiate conversations, make timing decisions, and maintain engagement autonomously.
```python
import time

from proactiveagent import ProactiveAgent, OpenAIProvider

agent = ProactiveAgent(
    provider=OpenAIProvider(model="gpt-5-nano"),
    system_prompt="You are a casual bored teenager. Answer like you're texting a friend.",
    decision_config={
        'wake_up_pattern': "Use the pace of a normal text chat",
    },
)

agent.add_callback(lambda response: print(f"🤖 AI: {response}"))
agent.start()

print("Chat started! Type your messages:")
while True:
    message = input("You: ").strip()
    if message.lower() == 'quit':
        break
    agent.send_message(message)
    time.sleep(3)

agent.stop()
print("Chat ended!")
```
This creates a conversational agent that:
- Operates on an autonomous schedule
- Decides when to respond based on conversation context
- Maintains a natural conversation flow over time
Key Parameters:
- `provider` - The LLM provider the agent uses, e.g. `OpenAIProvider`
- `decision_config` - Configuration dictionary that controls the agent's behavior
- `wake_up_pattern` - Natural-language description of how to determine sleep intervals between wake cycles (used by the default AI-based sleep calculator)
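Because `wake_up_pattern` is free-form natural language, different conversational cadences can be described in plain English. A minimal sketch (these exact strings are illustrative examples, not keywords recognized by the library):

```python
# Illustrative wake_up_pattern phrasings: the default AI-based sleep
# calculator interprets them as natural language, so any description
# of cadence should work.
slow_chat = {'wake_up_pattern': "Check in once or twice an hour, like a busy friend"}
fast_chat = {'wake_up_pattern': "Reply within seconds, like a live support chat"}
```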
```python
import time

from proactiveagent import ProactiveAgent, OpenAIProvider

def on_response(response: str):
    """Called when AI sends a response"""
    print(f"🤖 AI: {response}")

def on_sleep(sleep_time: int, reasoning: str):
    """Called when AI calculates sleep time"""
    print(f"⏰ Sleep: {sleep_time}s - {reasoning}")

def on_decision(should_respond: bool, reasoning: str):
    """Called when AI makes a decision about whether to respond"""
    status = "RESPOND" if should_respond else "WAIT"
    print(f"🧠 {status} - {reasoning}")

agent = ProactiveAgent(
    provider=OpenAIProvider(model="gpt-5-nano"),
    system_prompt="You are a casual bored teenager. Answer like you're texting a friend.",
    decision_config={
        'wake_up_pattern': "Use the pace of a normal text chat",
    },
)

agent.add_callback(on_response)
agent.add_sleep_time_callback(on_sleep)
agent.add_decision_callback(on_decision)
agent.start()

print("Chat started! Type your messages:")
while True:
    message = input("You: ").strip()
    if message.lower() == 'quit':
        break
    agent.send_message(message)
    time.sleep(3)

agent.stop()
print("Chat ended!")
```
Callbacks are useful for debugging, logging, and integrating with external systems.
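For example, because a callback is just a plain callable, it can forward events to any external sink. A minimal sketch routing decisions into Python's standard `logging` module (the `log_decision` name and logger name are assumptions for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chat")

# Forward each decision to the logging module; returns the formatted
# line so the behavior is easy to test in isolation.
def log_decision(should_respond: bool, reasoning: str) -> str:
    line = f"{'RESPOND' if should_respond else 'WAIT'} - {reasoning}"
    log.info(line)
    return line
```

Registering it would then look like `agent.add_decision_callback(log_decision)`.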
```python
import time

from proactiveagent import ProactiveAgent, OpenAIProvider
from proactiveagent.decision_engines import FunctionBasedDecisionEngine

def custom_decision_function(messages, last_user_message_time, context, config, triggered_by_user_message):
    """Custom decision logic with specific timing rules"""
    current_time = time.time()
    elapsed_time = int(current_time - last_user_message_time)

    # Respond immediately to new user messages
    if triggered_by_user_message:
        return True, "User just sent a message"

    # Wait at least 60 seconds between responses
    if elapsed_time < 60:
        return False, f"Too soon - wait {60 - elapsed_time}s more"

    # Respond if we've been quiet for more than 2 minutes
    if elapsed_time > 120:
        return True, f"Been quiet for {elapsed_time}s - time to respond"

    return False, "Waiting for good timing"

provider = OpenAIProvider(model="gpt-5-nano")
decision_engine = FunctionBasedDecisionEngine(custom_decision_function)

agent = ProactiveAgent(
    provider=provider,
    decision_engine=decision_engine,
    system_prompt="You are a helpful AI assistant.",
    decision_config={
        'min_response_interval': 60,
        'max_response_interval': 300,
    },
)
```
The function receives the conversation state and must return a tuple of `(should_respond: bool, reasoning: str)`.
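Any function with that signature and return contract works. As another illustration, a hypothetical decision function that stays silent during quiet hours (the name and rules here are made up for the example, not part of the library):

```python
import time

def quiet_hours_decision(messages, last_user_message_time, context, config, triggered_by_user_message):
    """Hypothetical decision function obeying the (bool, str) contract."""
    # Always reply directly to a fresh user message
    if triggered_by_user_message:
        return True, "User just sent a message"

    # Never initiate conversation between 22:00 and 08:00 local time
    hour = time.localtime().tm_hour
    if not 8 <= hour < 22:
        return False, "Quiet hours - stay silent"

    return False, "No reason to speak up"
```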
Why AI-based is the default: LLMs excel at evaluating conversation context, engagement levels, and timing appropriateness—judgments that are difficult to encode in simple rules.
```python
import time

from proactiveagent import ProactiveAgent, OpenAIProvider
from proactiveagent.sleep_time_calculators import StaticSleepCalculator

def on_ai_response(response: str):
    print(f"🤖 AI: {response}")

def on_sleep_time_calculated(sleep_time: int, reasoning: str):
    print(f"⏰ Sleep: {sleep_time}s - {reasoning}")

def main():
    provider = OpenAIProvider(model="gpt-5-nano")

    # Use static sleep calculator with fixed 2-minute intervals
    sleep_calculator = StaticSleepCalculator(120, "Fixed 2-minute intervals")

    agent = ProactiveAgent(
        provider=provider,
        system_prompt="You are a helpful AI assistant.",
        decision_config={
            'min_sleep_time': 60,
            'max_sleep_time': 300,
        },
    )

    # Override the default sleep calculator
    agent.scheduler.sleep_calculator = sleep_calculator

    agent.add_callback(on_ai_response)
    agent.add_sleep_time_callback(on_sleep_time_calculated)
    agent.start()

    print("=== Static Sleep Calculator ===")
    print("Always sleeps for 2 minutes between checks.")
    print("Type 'quit' to exit.\n")

    while True:
        message = input("You: ").strip()
        if message.lower() == 'quit':
            break
        agent.send_message(message)
        time.sleep(1)

    agent.stop()

if __name__ == "__main__":
    main()
```
```python
agent = ProactiveAgent(
    provider=provider,
    system_prompt="You are a helpful AI assistant.",
    decision_config={
        # Response timing parameters
        "min_response_interval": 30,
        "max_response_interval": 600,

        # Decision-making weights and thresholds
        "engagement_threshold": 0.5,
        "engagement_high_threshold": 10,
        "engagement_medium_threshold": 3,
        "context_relevance_weight": 0.4,
        "time_weight": 0.3,
        "probability_weight": 0.3,

        # Sleep calculation parameters
        "wake_up_pattern": "Check every 2-3 minutes when active",
        "min_sleep_time": 30,
        "max_sleep_time": 600,
    },
)
```
For comprehensive configuration examples, see the examples/configs/ directory.