
Choosing Your SDK

DRIP/KG-RAG offers two SDKs: the Functor SDK for direct REST API access and the MCP SDK for Model Context Protocol integration. This guide helps you choose the right one for your use case.

SDK Comparison

Functor SDK

REST API Client

Direct access to the DRIP/KG-RAG REST API with full control over all operations.

  • ✅ Simple REST API interface
  • ✅ Full CRUD operations
  • ✅ Sync & async support
  • ✅ Direct HTTP access
  • ✅ Standalone applications

MCP SDK

Model Context Protocol

Agent-first interface for LLM applications with advanced memory management.

  • ✅ Agent framework integration
  • ✅ Memory-first operations
  • ✅ LangChain & CAMEL-AI support
  • ✅ Tool-based interface
  • ✅ Multi-agent systems

Detailed Comparison

| Feature           | Functor SDK              | MCP SDK             |
| ----------------- | ------------------------ | ------------------- |
| Primary Use Case  | Direct API access        | Agent integration   |
| Learning Curve    | Low                      | Medium              |
| Memory Operations | Via API endpoints        | First-class tools   |
| Agent Frameworks  | Manual integration       | Built-in adapters   |
| Async Support     | ✅ Full                  | ✅ Full             |
| Type Safety       | ✅ Pydantic models       | ✅ Pydantic models  |
| Best For          | Apps, scripts, services  | AI agents, chatbots |

When to Use Functor SDK

✅ Choose Functor SDK if you:

  • Are building traditional applications: Web apps, mobile backends, data pipelines
  • Need full API control: Direct access to all REST endpoints
  • Want simplicity: Straightforward request/response patterns
  • Have existing REST integrations: Easy to add to your current architecture
  • Are building standalone services: Microservices, batch processors
  • Prefer HTTP semantics: Familiar REST patterns

Code Example

from functor_sdk import FunctorClient

# Simple and direct
client = FunctorClient(api_key="your-key")

# Upload a document
upload = client.ingestion.upload_url(
    url="https://example.com/doc.pdf",
    kg_name="KG_Universal"
)

# Query the knowledge graph
result = client.queries.execute("What is in the document?")
print(result.answer)

# Manage sources
sources = client.sources.list_for_kg("KG_Universal")
for source in sources:
    print(f"{source.source_name}: {source.chunks_count} chunks")

When to Use MCP SDK

✅ Choose MCP SDK if you:

  • Are building AI agents: LangChain, CAMEL-AI, or custom agent systems
  • Need memory management: Episodic, semantic, and procedural memory
  • Want tool-based interfaces: MCP tools for agent frameworks
  • Are building multi-agent systems: Coordinated agent interactions
  • Need advanced memory features: Memory consolidation, retrieval strategies
  • Prefer declarative patterns: Tool definitions and prompts

Code Example

from drip_mcp import DripClient

# Agent-friendly interface
client = DripClient(
    server_url="http://localhost:8000",
    api_key="your-key"
)
client.connect()

# Memory-first operations
episode_id = client.add_episode(
    content="User asked about machine learning",
    user_id="alice",
    metadata={"category": "question"}
)

# Retrieve with memory context
result = client.retrieve_memory(
    query="What did Alice ask about?",
    kg_name="KG_Universal",
    user_id="alice"
)

# Add semantic facts
client.add_fact(
    fact="Machine learning is a subset of AI",
    category="knowledge"
)

Use Case Examples

Scenario 1: Data Processing Pipeline

Recommendation: Functor SDK

A batch processing system that ingests documents, processes them, and indexes into knowledge graphs. Needs direct control over API operations with error handling.
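The error-handling pattern such a pipeline needs can be sketched independently of the SDK. In this sketch, `upload` is a hypothetical stand-in for a call like `client.ingestion.upload_file`, and the retry counts and delays are illustrative:

```python
import time

def upload_with_retry(upload, file_path, max_attempts=3, base_delay=1.0):
    """Call upload(file_path), retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return upload(file_path)
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))

# Demonstrate with a flaky stand-in that fails twice, then succeeds
calls = []
def flaky_upload(path):
    calls.append(path)
    if len(calls) < 3:
        raise ConnectionError("transient failure")
    return {"path": path, "status": "indexed"}

result = upload_with_retry(flaky_upload, "report.pdf", base_delay=0.01)
print(result["status"])  # → indexed
```

In a real pipeline you would pass the SDK method in place of `flaky_upload` and likely log each failed attempt rather than retrying silently.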

Scenario 2: Intelligent Chatbot

Recommendation: MCP SDK

A conversational AI that remembers past interactions, retrieves relevant knowledge, and maintains user context across sessions. Needs episodic memory and semantic retrieval.
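The episodic-memory idea behind this scenario can be illustrated with a minimal in-memory store. This is purely conceptual — the MCP SDK's actual storage and retrieval are server-side and embedding-based, whereas this toy uses keyword overlap:

```python
from collections import defaultdict

class EpisodicStore:
    """Toy per-user episodic memory: append episodes, retrieve by keyword."""
    def __init__(self):
        self.episodes = defaultdict(list)  # user_id -> list of episode texts

    def add_episode(self, user_id, content):
        self.episodes[user_id].append(content)

    def retrieve(self, user_id, query):
        # Real retrieval would use embeddings; keyword overlap suffices here
        terms = set(query.lower().split())
        return [e for e in self.episodes[user_id]
                if terms & set(e.lower().split())]

store = EpisodicStore()
store.add_episode("alice", "User asked about machine learning")
store.add_episode("alice", "User uploaded a PDF about transformers")
print(store.retrieve("alice", "machine learning"))
```

The key property, which the MCP SDK provides out of the box, is that memories are scoped per user and queried by relevance rather than fetched wholesale.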

Scenario 3: Research Assistant

Recommendation: Both

Use Functor SDK for document ingestion and management. Use MCP SDK for the AI assistant that helps users explore and query the research database.

Scenario 4: Analytics Dashboard

Recommendation: Functor SDK

A web dashboard showing knowledge graph statistics, source management, and system health. Needs direct access to all API endpoints with minimal abstraction.
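Aggregating per-KG statistics for such a dashboard is a simple loop over the listing endpoint. The field names (`source_name`, `chunks_count`) follow the earlier Functor SDK example; the dict-shaped records here are a stand-in for the SDK's Pydantic models:

```python
def kg_stats(sources):
    """Summarize a list of source records into dashboard-ready numbers."""
    total_chunks = sum(s["chunks_count"] for s in sources)
    largest = max(sources, key=lambda s: s["chunks_count"]) if sources else None
    return {
        "source_count": len(sources),
        "total_chunks": total_chunks,
        "largest_source": largest["source_name"] if largest else None,
    }

# Stand-in for client.sources.list_for_kg("KG_Universal")
sources = [
    {"source_name": "doc.pdf", "chunks_count": 42},
    {"source_name": "notes.md", "chunks_count": 7},
]
stats = kg_stats(sources)
print(stats)
```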

Using Both SDKs Together

You can use both SDKs in the same application for different purposes:

# Use Functor SDK for data management
from functor_sdk import FunctorClient

data_client = FunctorClient(api_key="your-key")

# Batch upload documents
for file_path in document_paths:
    data_client.ingestion.upload_file(
        file_path=file_path,
        kg_name="KG_Universal"
    )

# Use MCP SDK for agent interactions
from drip_mcp import DripClient

agent_client = DripClient(
    server_url="http://localhost:8000",
    api_key="your-key"
)
agent_client.connect()

# Agent queries with memory
result = agent_client.retrieve_memory(
    query="What documents were uploaded today?",
    user_id="alice"
)
print(result)

Migration Between SDKs

From Functor SDK to MCP SDK

# Before: Functor SDK
from functor_sdk import FunctorClient

client = FunctorClient()
result = client.queries.execute("What is AI?")

# After: MCP SDK
from drip_mcp import DripClient

client = DripClient(server_url="http://localhost:8000")
client.connect()
result = client.retrieve_memory(query="What is AI?", kg_name="KG_Universal")

From MCP SDK to Functor SDK

# Before: MCP SDK
from drip_mcp import DripClient

client = DripClient(server_url="http://localhost:8000")
client.connect()
result = client.ingest_memory(content="AI document", kg_name="KG_Universal")

# After: Functor SDK
from functor_sdk import FunctorClient

client = FunctorClient()
result = client.ingestion.upload_url(
    url="https://example.com/ai-document.pdf",
    kg_name="KG_Universal"
)

Quick Decision Tree

Choose Your SDK:

1. Are you building an AI agent or chatbot?

   → Yes? Use MCP SDK
   → No? Continue to #2

2. Do you need episodic memory management?

   → Yes? Use MCP SDK
   → No? Continue to #3

3. Are you integrating with LangChain or CAMEL-AI?

   → Yes? Use MCP SDK
   → No? Continue to #4

4. Do you need direct REST API access?

   → Yes? Use Functor SDK
   → Not sure? Use Functor SDK (easier to start)
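The decision tree above can be codified as a small helper. This is illustrative only — the flag names are made up for the sketch:

```python
def choose_sdk(building_agent=False, needs_episodic_memory=False,
               uses_agent_framework=False):
    """Walk the decision tree: any agent-oriented need points to the MCP SDK."""
    if building_agent or needs_episodic_memory or uses_agent_framework:
        return "MCP SDK"
    # Direct REST access, or unsure: the Functor SDK is easier to start with
    return "Functor SDK"

print(choose_sdk(building_agent=True))  # → MCP SDK
print(choose_sdk())                     # → Functor SDK
```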

Next Steps