
Stateful Agent

Build an 'omniscient' customer support agent that knows user history before the first message.

Domain: Customer Support
Key Concepts: Data Ingestion, Context Awareness, Identity Resolution

Problem Statement

The "Cold Start" problem plagues chatbots. A user says "I can't log in," and the bot asks "What's your username?" But your database already knows their account is suspended. A Stateful Agent bridges this gap by checking historical data before responding.

Architecture: Write-then-Read Pattern

This pattern "primes" memory with historical data (the write), then uses the user's live input to query that history (the read), as sketched below.
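
At a glance, the pattern has two phases: an offline write phase that loads history into memory, and an online read phase that retrieves it on every message. A minimal sketch of that shape (prime_memory and answer are hypothetical helper names; get_context is the same client call used in Step 2 below):

from functor_sdk import FunctorClient

client = FunctorClient()

def prime_memory(user_id):
    # WRITE phase: run in batch, before the user ever chats.
    # Ingest database events and old tickets here (see Step 1).
    ...

async def answer(user_id, message):
    # READ phase: run on every incoming message.
    context = client.memory.get_context(
        query=message,
        user_id=user_id,
        kg_names=["support_kg"]
    )
    # Feed the retrieved context to the LLM (see Step 2).
    ...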

Implementation Steps

1. Priming: Ingesting Heterogeneous Data

We ingest structured data (JSON) and unstructured data (Chat Logs) into the same Knowledge Graph. This allows the LLM to reason across both.

from functor_sdk import FunctorClient

client = FunctorClient()
USER_ID = "user_123"

# 1. Ingest structured data (e.g., from Stripe or your own database)
event_log = {
    "event": "payment_failed",
    "reason": "card_expired",
    "timestamp": "2023-10-25T10:00:00Z"
}

# Convert the JSON event into a semantic fact
client.memory.semantic.add_fact(
    content=f"Payment failed on {event_log['timestamp']} because {event_log['reason']}",
    user_id=USER_ID,
    kg_name="support_kg",
    metadata={"source": "stripe", "type": "billing"}
)

# 2. Ingest unstructured history (old tickets)
old_ticket = "User complained about login button not working on Firefox."
client.memory.episodic.create(
    user_id=USER_ID,
    summary=old_ticket,
    event_type="ticket_history",
    timestamp="2023-09-01T10:00:00Z"
)
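
Before wiring up the chat loop, it is worth confirming that the primed history is retrievable. A quick smoke test, reusing the get_context call from Step 2 (the query string is arbitrary; what surfaces depends on Functor's matching):

context = client.memory.get_context(
    query="login problems",
    user_id=USER_ID,
    kg_names=["support_kg"]
)
print(context.formatted_string)
# The payment_failed fact and the old Firefox ticket should surface here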

2. The Interactive Loop

When the user chats, we query both memory types. A message like "I can't log in" will semantically match the "payment failed" fact we just ingested, along with the old login ticket in episodic memory.

async def handle_user_message(user_message, user_id):
    # 1. Retrieve context
    # Functor automatically searches KG, semantic, and episodic memory
    context = client.memory.get_context(
        query=user_message,
        user_id=user_id,
        kg_names=["support_kg"]
    )

    # 2. Construct the system prompt
    system_prompt = f"""
    You are a helpful support agent.

    CONTEXT FROM HISTORY:
    {context.formatted_string}

    Current User Message: {user_message}
    """

    # 3. Call the LLM (`llm` is your model client, defined elsewhere)
    response = await llm.generate(system_prompt)
    return response

# Example Run:
# User: "I can't log in."
# Context Found: "Payment failed due to card expiration."
# Agent Response: "I see your account is paused because your card expired. Would you like to update it?"
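
To try the loop end to end, the coroutine needs an event loop and a concrete LLM client. A minimal harness, assuming an llm.generate coroutine exists as sketched above:

import asyncio

async def main():
    # USER_ID is the same user primed in Step 1
    reply = await handle_user_message("I can't log in.", USER_ID)
    print(reply)

asyncio.run(main())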

Key Takeaways

  • Unified Memory: Mixing rigid database events with messy chat history gives the agent a complete picture.
  • Proactive Support: The agent suggests the solution (update card) instead of asking for the problem.
  • Identity Resolution: All data is keyed to user_id, ensuring data privacy and relevance.