LangChain Integration

Build LangChain agents with persistent memory using Functor tools.

Installation

Install both the Functor SDK and LangChain:

pip install functor-sdk langchain langchain-openai

Quick Start

Step 1: Get Functor Tools

Use the convenience function to get all memory tools as LangChain tools:

from functor_sdk import FunctorClient
from functor_sdk.tools import get_functor_tools

# Initialize client
client = FunctorClient(api_key="your-functor-api-key")

# Get all 71 memory tools as LangChain StructuredTools
tools = get_functor_tools(client)
print(f"Loaded {len(tools)} LangChain tools")

Step 2: Create Agent

Create a LangChain agent with the memory tools:

from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.prompts import PromptTemplate

# Initialize LLM
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Create ReAct prompt (create_react_agent's output parser requires the
# Thought/Action/Final Answer format instructions below)
prompt = PromptTemplate.from_template("""You are a helpful assistant with persistent memory.
You have access to memory tools that let you:
- Store and recall conversations (episodic memory)
- Learn and remember facts (semantic memory)
- Save procedures and workflows (procedural memory)
- Remember user preferences (personalization)
Use these tools to provide personalized, context-aware responses.

You have access to the following tools:
{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original question

Question: {input}
Thought:{agent_scratchpad}""")

# Create agent
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

Step 3: Run Agent

Your agent now has persistent memory:

# Store a preference
response = agent_executor.invoke({
    "input": "Remember that I prefer dark mode in all applications"
})
print(response["output"])

# Later, recall the preference
response = agent_executor.invoke({
    "input": "What are my UI preferences?"
})
print(response["output"])

Manual Tool Generation

For more control, you can manually generate LangChain tools:

from functor_sdk import FunctorClient
from functor_sdk.tools import (
    ToolRegistry,
    FunctorToolContext,
    generate_langchain_tools,
)

# Initialize
client = FunctorClient(api_key="your-api-key")
registry = ToolRegistry()
ctx = FunctorToolContext(client)

# Discover tools
registered_tools = registry.discover(client.memory)

# Generate LangChain tools
langchain_tools = generate_langchain_tools(registered_tools, ctx)

# Filter to specific namespaces if needed
episodic_tools = [t for t in langchain_tools if "episodic" in t.name]
print(f"Episodic tools: {len(episodic_tools)}")

Complete Example: Conversational Agent

Here's a complete example of a conversational agent that remembers context:

import asyncio
from functor_sdk import FunctorClient
from functor_sdk.tools import get_functor_tools
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Initialize Functor client
functor_client = FunctorClient(api_key="your-functor-key")

# Get memory tools
tools = get_functor_tools(functor_client)

# Initialize LLM
llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

# Create prompt with memory awareness
prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a helpful assistant with persistent memory capabilities.
At the START of each conversation:
1. Use functor_episodic_search to find relevant past conversations
2. Use functor_personalization_get_context to load user preferences
During the conversation:
- Store important information using functor_semantic_add_fact
- Update user preferences with functor_personalization_add_preference
- Log significant interactions with functor_episodic_create
Be proactive about using memory to provide personalized responses."""),
    MessagesPlaceholder(variable_name="chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Create agent
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,
)

async def chat(user_id: str):
    """Interactive chat with memory."""
    print("Chat with memory-enabled agent. Type 'quit' to exit.\n")
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'quit':
            break
        response = await agent_executor.ainvoke({
            "input": f"[User: {user_id}] {user_input}",
        })
        print(f"\nAssistant: {response['output']}\n")

# Run
asyncio.run(chat("user-123"))

Tool Selection

You can select specific tools instead of loading all 71:

from functor_sdk.tools import get_functor_tools

# Get all tools
all_tools = get_functor_tools(client)

# Filter by namespace
episodic_tools = [t for t in all_tools if t.name.startswith("functor_episodic")]
semantic_tools = [t for t in all_tools if t.name.startswith("functor_semantic")]
personalization_tools = [t for t in all_tools if t.name.startswith("functor_personalization")]

# Create focused toolset
focused_tools = episodic_tools + semantic_tools + personalization_tools
print(f"Using {len(focused_tools)} tools for conversational agent")

LangGraph Integration

Functor tools work seamlessly with LangGraph for complex workflows:

from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import BaseMessage
import operator

# Define state
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]

# Get Functor tools (client and llm come from the earlier examples)
functor_tools = get_functor_tools(client)

# Bind the tools to the model so it can emit tool calls
llm_with_tools = llm.bind_tools(functor_tools)

# Create tool node
tool_node = ToolNode(functor_tools)

# Build graph
workflow = StateGraph(AgentState)

def should_continue(state):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
workflow.add_edge("tools", "agent")

# Compile
app = workflow.compile()
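
Once compiled, the graph is invoked with an initial message list and returns the final state. A minimal usage sketch (the question text is illustrative):

from langchain_core.messages import HumanMessage

# One turn through the graph: the agent node may call memory tools
# before producing the final answer, which ends the run
result = app.invoke({"messages": [HumanMessage(content="What are my saved UI preferences?")]})
print(result["messages"][-1].content)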

Best Practices

Memory-Aware Prompting

  • Instruct the agent to check memory at conversation start
  • Store important facts immediately when learned
  • Use episodic memory for conversation history
  • Use semantic memory for facts and knowledge
  • Use personalization for user-specific preferences
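
These practices can be encoded directly in the system prompt. A minimal sketch, reusing the tool names from the complete example above:

MEMORY_SYSTEM_PROMPT = """You are an assistant with persistent memory.
At the start of each conversation, call functor_episodic_search and
functor_personalization_get_context before answering.
Store durable facts with functor_semantic_add_fact as you learn them.
Save stated preferences with functor_personalization_add_preference.
Log significant interactions with functor_episodic_create."""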

Error Handling

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,  # Handle LLM output parsing errors
    max_iterations=10,  # Prevent infinite loops
    return_intermediate_steps=True,  # Debug tool calls
)

try:
    response = agent_executor.invoke({"input": "..."})
except Exception as e:
    print(f"Agent error: {e}")
    # Gracefully handle memory unavailability

Next Steps