
Hierarchical Context

Blend individual user memory with shared group knowledge.

Domain: Enterprise / Multi-Tenant
Key Concepts: Scoping, Context Fusion, Data Privacy

Problem Statement

In enterprise apps, users belong to groups (Companies, Departments). If User A teaches the agent "Our holiday policy is X," User B should know this immediately. However, User A's private "I like coffee" preference should remain private.

Architecture: Context Fusion

We maintain separate graphs for User and Group, and fuse them at runtime.
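
Conceptually, "fusion" is nothing more than running the same query against two scopes and keeping the results labeled by source. Below is a minimal sketch of that shape, assuming the client.memory.search call used in the steps that follow; the helper name fuse_context is ours, not part of the SDK.

def fuse_context(client, query: str, user_id: str, group_id: str) -> dict:
    # Personal scope: the individual's private preferences and history
    personal = client.memory.search(query=query, user_id=user_id, limit=2)
    # Shared scope: knowledge visible to everyone in the group
    shared = client.memory.search(query=query, user_id=group_id, limit=5)
    # Keep the sources separate so the prompt can label them explicitly
    return {"user": personal, "group": shared}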

Implementation Steps

1. Setting up Scopes

We create a shared "Company" scope and ingest static business data into it.

COMPANY_ID = "org_toyota_north"
# Ingest Inventory into Shared Scope
inventory_data = [
{"model": "Camry", "color": "Red", "type": "Sedan"},
{"model": "Sienna", "color": "Blue", "type": "Minivan"}
]
for item in inventory_data:
client.memory.semantic.add_fact(
content=f"{item['model']} is a {item['type']} available in {item['color']}",
# This memory belongs to the COMPANY, not a specific user
user_id=COMPANY_ID,
kg_name="inventory_kg"
)
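
For contrast, a user-private fact uses the same call but is keyed to the individual's ID, so it never surfaces in company-scoped searches. The kg_name below is illustrative:

# Private preference: scoped to Alice, invisible to COMPANY_ID queries
client.memory.semantic.add_fact(
    content="Alice has 3 kids and prefers larger vehicles",
    user_id="user_alice",
    kg_name="preferences_kg"  # illustrative graph name
)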

2. The Dual-Search Pattern

When a user asks a question, we execute two scoped searches: one against their personal memory for preferences, and one against the shared scope for inventory. They are shown sequentially here for readability; a parallel version is sketched after the code.

user_id = "user_alice"
query = "Do you have any red cars for my family?"
# 1. Search User Memory (Personal Preferences)
user_context = client.memory.search(
query=query,
user_id=user_id, # Scope: Alice
limit=2
)
# Result: "Alice has 3 kids" (Implies need for size)
# 2. Search Group Memory (Shared Inventory)
group_context = client.memory.search(
query=query,
user_id=COMPANY_ID, # Scope: Toyota North
limit=5
)
# Result: "Camry (Red)", "Sienna (Blue)"

3. Context Stacking

The final step is formatting these distinct sources so the LLM understands their relationship.

prompt = f"""
<USER_CONTEXT>
{user_context}
(Note: User has 3 kids, so prioritize larger vehicles)
</USER_CONTEXT>
<AVAILABLE_INVENTORY>
{group_context}
</AVAILABLE_INVENTORY>
Based on the inventory and the user's needs, recommend a car.
"""
# The LLM will likely recommend the Minivan despite the color mismatch,
# or the Sedan with a warning about size, effectively reasoning across scopes.
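
From here, the stacked prompt goes to whatever LLM you use. The snippet below uses the OpenAI Python SDK purely as an example; the provider and model name are assumptions, not part of this recipe:

from openai import OpenAI

llm = OpenAI()  # example provider; any chat-completion client works
response = llm.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}]
)
print(response.choices[0].message.content)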

Data Privacy

This architecture ensures strict data segregation: queries scoped to COMPANY_ID never return user_alice's private data, and vice versa. You can also "wipe" a single user's memory without affecting the company knowledge base.
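
A per-user wipe then reduces to deleting everything under that user's scope, leaving the company scope untouched. The delete method below is hypothetical; check your SDK for the actual call:

# Hypothetical wipe: removes only memories scoped to user_alice.
# Nothing under COMPANY_ID (e.g. inventory_kg) is affected.
client.memory.delete_all(user_id="user_alice")  # hypothetical method name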