Companion with Memory
Build a fitness coach that remembers goals, injuries, and preferences across sessions.
Problem Statement
Standard LLM calls are stateless. If a user tells a fitness bot "I hurt my knee" on Tuesday, nothing stops the bot from suggesting squats on Thursday. Building a true companion requires a memory system that can:
- Persist critical facts (injuries, goals) across app restarts.
- Filter noise (ignore "hello", "ok thanks") to save costs.
- Organize memories by type (goals vs. constraints), as in the record sketch below.
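As a working baseline, the rest of this guide assumes a small file-backed store along these lines. MemoryRecord, MemoryStore, and every field name here are illustrative choices for this walkthrough, not the API of any particular memory library.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class MemoryRecord:
    kind: str                     # "goal", "constraint", "preference", "episode", "fact"
    content: str                  # the fact or message text itself
    tags: list[str] = field(default_factory=list)
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    expires_at: str | None = None # optional ISO timestamp for time-bound facts

class MemoryStore:
    """Persists one user's memories to a JSON file so they survive restarts."""

    def __init__(self, path: str):
        self.path = Path(path)
        raw = json.loads(self.path.read_text()) if self.path.exists() else []
        self.records = [MemoryRecord(**r) for r in raw]

    def add(self, record: MemoryRecord) -> None:
        self.records.append(record)
        self.path.write_text(json.dumps([asdict(r) for r in self.records], indent=2))
```

A JSON file is enough for a single-user demo; once you need semantic search at scale, the same record shape can move into a database or vector store.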
Architecture: The Memory Loop
The core of this pattern is a Retrieve-Generate-Store loop that runs on every interaction: retrieve relevant memories, inject them into the prompt, generate a reply, then store anything worth keeping.
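In code, the loop is roughly the following sketch. handle_turn, retrieve, generate, and store are placeholder names introduced here; generate wraps whatever LLM client you use, and the other two are built out in the steps below.

```python
# One turn of the Retrieve-Generate-Store loop (all callables are stand-ins).
def handle_turn(user_id: str, message: str, *, retrieve, generate, store) -> str:
    # 1. Retrieve: pull memories relevant to this message.
    memories = retrieve(user_id, message)

    # 2. Generate: give the model the memories as context alongside the message.
    context = "\n".join(f"- {m}" for m in memories)
    reply = generate(
        system=f"You are a fitness coach. Known about this user:\n{context}",
        user=message,
    )

    # 3. Store: persist anything from this exchange worth remembering.
    store(user_id, message, reply)
    return reply
```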
Implementation Steps
1. Initializing Memory
First, we check if the user has a history. If not, we can "seed" the memory with initial goals.
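A minimal sketch of that check, reusing the MemoryStore and MemoryRecord shapes from above; ensure_seeded and the example onboarding goals are illustrative.

```python
def ensure_seeded(store: "MemoryStore", onboarding_goals: list[str]) -> None:
    """If this user has no history yet, seed memory with their stated goals."""
    if store.records:  # returning user: history already exists
        return
    for goal in onboarding_goals:
        store.add(MemoryRecord(kind="goal", content=goal, tags=["goal"]))

# e.g. ensure_seeded(MemoryStore("alice.json"), ["Run a 10k by June", "Train 3x per week"])
```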
2. The Retrieval Phase
Before calling the LLM, we search memory for relevant context. Note that results can be ranked by relevance or by recency.
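Here is a minimal retrieval sketch against the same store. It ranks by naive keyword overlap for relevance and by timestamp for recency; a production system would typically use embedding similarity instead.

```python
def retrieve(store: "MemoryStore", query: str, k: int = 5, by: str = "relevance") -> list[str]:
    records = list(store.records)
    if by == "recency":
        ranked = sorted(records, key=lambda r: r.created_at, reverse=True)
    else:  # "relevance": score by words shared between the query and the memory
        words = set(query.lower().split())
        ranked = sorted(
            records,
            key=lambda r: len(words & set(r.content.lower().split())),
            reverse=True,
        )
    return [r.content for r in ranked[:k]]
```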
3. Storing Interactions
After the exchange, we save the interaction, distinguishing episodic memory (the conversation flow) from semantic memory (facts extracted from it).
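A sketch of the storage phase, continuing the same store. extract_facts stands in for an LLM call that returns short declarative facts ("User has a knee injury"), and the noise check is a crude placeholder for whatever filtering you use.

```python
NOISE = {"hi", "hello", "ok", "thanks", "ok thanks", "bye"}

def store_interaction(store: "MemoryStore", user_msg: str, reply: str, extract_facts) -> None:
    # Episodic memory: the raw conversation turn.
    store.add(MemoryRecord(kind="episode", content=f"user: {user_msg} | coach: {reply}"))

    # Skip noise so we don't pay to extract facts from "ok thanks".
    if user_msg.strip().lower() in NOISE or len(user_msg.split()) < 3:
        return

    # Semantic memory: durable facts extracted from the exchange.
    for fact in extract_facts(user_msg, reply):
        store.add(MemoryRecord(kind="fact", content=fact))
```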
Advanced Features
Time-Bound Memories
Some facts expire: an injury is temporary. Attach a ttl_days value or an explicit expiration date so stale context drops out of retrieval.
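One way to implement this on the store above: write an expiry timestamp with the fact and drop expired records at read time. ttl_days here is just an argument to this helper, not a feature of any specific library.

```python
from datetime import datetime, timedelta, timezone

def add_temporary_fact(store: "MemoryStore", content: str, ttl_days: int) -> None:
    expires = (datetime.now(timezone.utc) + timedelta(days=ttl_days)).isoformat()
    store.add(MemoryRecord(kind="constraint", content=content, expires_at=expires))

def active_records(store: "MemoryStore") -> list["MemoryRecord"]:
    now = datetime.now(timezone.utc)
    return [
        r for r in store.records
        if r.expires_at is None or datetime.fromisoformat(r.expires_at) > now
    ]

# e.g. add_temporary_fact(store, "Avoid squats: knee injury", ttl_days=42)
```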
Memory Categorization
Tagging memories allows for precise retrieval. A "Planning Agent" might only need goals, while a "Support Agent" needs preferences.
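A tag-scoped lookup might look like this; the tag names and agent roles are illustrative.

```python
def retrieve_by_tags(store: "MemoryStore", wanted: set[str]) -> list[str]:
    """Return only memories carrying at least one of the wanted tags."""
    return [r.content for r in store.records if wanted & set(r.tags)]

# A planning agent might call retrieve_by_tags(store, {"goal"}),
# while a support agent calls retrieve_by_tags(store, {"preference"}).
```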