# Integrating MCP with Agent Frameworks (JavaScript)
A practical checklist for adding persistent memory to your AI agents. Step-by-step integration guides for LangChain.js, Vercel AI SDK, and other popular JavaScript frameworks.
## The Integration Challenge
You've built an AI agent. It works great—until the next session, when it forgets everything. Sound familiar? Adding persistent memory to agent frameworks shouldn't require a PhD in distributed systems. This guide provides a quick integration checklist for the most popular JavaScript agent frameworks.
Whether you're using LangChain.js, Vercel AI SDK, or building your own agent from scratch, the pattern is the same: connect to CodeMem via MCP, and your agent gains long-term memory in minutes.
## Quick Integration Checklist
Before diving into framework-specific code, here's your universal checklist:
- ☐ Get API Key: Sign up at app.codemem.dev
- ☐ Install SDK: `npm install @codemem/mcp-client`
- ☐ Initialize Client: Create connection with your API key
- ☐ Add Memory Hook: Store insights after each agent action
- ☐ Add Recall Hook: Search relevant memories before decisions
- ☐ Tag by Context: Use project/user tags for isolation
- ☐ Test Persistence: Verify memories survive restarts
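The last checklist item, persistence, is worth an explicit smoke test: store a memory, simulate a restart with a fresh client instance, and verify recall. Here's a runnable sketch of that test, using a local JSON file as a stand-in for the remote CodeMem store (the `FileMemoryClient` below is purely illustrative; the real client's `add`/`search` calls are async and hit the CodeMem API):

```typescript
import * as fs from "node:fs";

interface Memory { content: string; tags?: string[] }

// Illustrative stand-in for a persistent memory client: a JSON file
// plays the role of the remote store so the pattern runs end to end.
class FileMemoryClient {
  constructor(private path: string) {}

  private load(): Memory[] {
    return fs.existsSync(this.path)
      ? JSON.parse(fs.readFileSync(this.path, "utf8"))
      : [];
  }

  add(mem: Memory): void {
    const all = this.load();
    all.push(mem);
    fs.writeFileSync(this.path, JSON.stringify(all));
  }

  search(query: string, limit = 5): Memory[] {
    // Naive substring match stands in for semantic search.
    return this.load()
      .filter((m) => m.content.toLowerCase().includes(query.toLowerCase()))
      .slice(0, limit);
  }
}

// Persistence check: a *fresh* client instance (simulating a restart)
// still sees what the first instance stored.
const store = "memories.test.json";
new FileMemoryClient(store).add({
  content: "User prefers TypeScript",
  tags: ["user-preference"],
});
const recalled = new FileMemoryClient(store).search("typescript");
console.log(recalled.length); // 1
fs.unlinkSync(store);
```

The same shape works against the real client: write with one process, restart, read with another.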
## LangChain.js Integration
LangChain.js is the most popular agent framework in the JavaScript ecosystem. Here's how to add CodeMem as a memory layer:
```typescript
import { CodeMemClient } from '@codemem/mcp-client';
import { AgentExecutor } from 'langchain/agents';
import { ChatOpenAI } from '@langchain/openai';

// Initialize CodeMem
const memory = new CodeMemClient({
  apiKey: process.env.CODEMEM_API_KEY,
  project: 'my-agent'
});

// Create memory-aware tools
const memoryTools = [
  {
    name: 'remember',
    description: 'Store important information for future reference',
    func: async (input: string) => {
      await memory.add({ content: input });
      return 'Stored in memory.';
    }
  },
  {
    name: 'recall',
    description: 'Search past memories for relevant context',
    func: async (query: string) => {
      const results = await memory.search({ query, limit: 5 });
      return results.map(m => m.content).join('\n');
    }
  }
];

// Add to your agent's tool list
const agent = await AgentExecutor.fromAgentAndTools({
  agent: yourAgent,
  tools: [...yourTools, ...memoryTools]
});
```

Checklist for LangChain.js:
- ✓ Add memory tools to agent executor
- ✓ Use `ConversationBufferMemory` for session + CodeMem for long-term
- ✓ Tag memories with chain/agent identifiers
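One way to implement that tagging is a tiny helper that namespaces every tag before it reaches `memory.add`. The `chain:`/`agent:` prefix convention below is an assumption for illustration, not a CodeMem requirement:

```typescript
// Hypothetical convention: prefix tags with their namespace so recall
// queries can be scoped to one chain or agent and never cross-contaminate.
function chainTags(chainId: string, agentId: string, extra: string[] = []): string[] {
  return [`chain:${chainId}`, `agent:${agentId}`, ...extra];
}

// e.g. memory.add({ content: '...', tags: chainTags('planner', 'researcher') })
const tags = chainTags('planner', 'researcher', ['architecture']);
```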
## Vercel AI SDK Integration
The Vercel AI SDK takes a streaming-first approach. Here's how to integrate CodeMem with its tool system:
```typescript
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { CodeMemClient } from '@codemem/mcp-client';

const memory = new CodeMemClient({
  apiKey: process.env.CODEMEM_API_KEY
});

const result = await generateText({
  model: openai('gpt-4-turbo'),
  tools: {
    addMemory: tool({
      description: 'Store information for future sessions',
      parameters: z.object({
        content: z.string(),
        tags: z.array(z.string()).optional()
      }),
      execute: async ({ content, tags }) => {
        await memory.add({ content, tags });
        return { success: true };
      }
    }),
    searchMemory: tool({
      description: 'Recall stored information',
      parameters: z.object({
        query: z.string(),
        limit: z.number().optional()
      }),
      execute: async ({ query, limit = 5 }) => {
        return await memory.search({ query, limit });
      }
    })
  },
  prompt: 'Help me plan my project architecture...'
});
```

Checklist for Vercel AI SDK:
- ✓ Define memory tools with Zod schemas
- ✓ Use `maxToolRoundtrips` to allow memory searches before responses
- ✓ Consider streaming implications for memory writes
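On the last point: one way to keep memory writes off the streaming path is to buffer them during generation and flush after the response has been sent. A minimal sketch (the queue and its injected `write` sink are hypothetical, not part of the SDK):

```typescript
// Buffer memory writes so a slow memory backend never delays the token
// stream. `write` is any async sink, e.g. the memory.add calls above;
// it is injected here so the queue stays self-contained.
class MemoryWriteQueue {
  private pending: string[] = [];

  constructor(private write: (content: string) => Promise<void>) {}

  // Called during streaming: synchronous, never awaits the backend.
  enqueue(content: string): void {
    this.pending.push(content);
  }

  // Called once the response has been sent; returns the batch size.
  async flush(): Promise<number> {
    const batch = this.pending;
    this.pending = [];
    await Promise.all(batch.map((c) => this.write(c)));
    return batch.length;
  }
}
```

`enqueue()` is O(1) on the hot path; `flush()` runs once the stream closes, so memory latency never shows up in time-to-first-token.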
## Custom Agent Integration
Building your own agent? Here's the minimal pattern for adding memory:
```typescript
import { CodeMemClient } from '@codemem/mcp-client';

class MemoryAwareAgent {
  private memory: CodeMemClient;
  // Note: this.llm, buildPrompt, and extractMemories are assumed to be
  // provided by your agent implementation; they are elided here.

  constructor(apiKey: string) {
    this.memory = new CodeMemClient({ apiKey });
  }

  async run(userInput: string): Promise<string> {
    // 1. Recall relevant context
    const context = await this.memory.search({
      query: userInput,
      limit: 3
    });

    // 2. Build prompt with memories
    const prompt = this.buildPrompt(userInput, context);

    // 3. Get LLM response
    const response = await this.llm.generate(prompt);

    // 4. Extract and store new memories
    const newMemories = this.extractMemories(response);
    for (const mem of newMemories) {
      await this.memory.add(mem);
    }

    return response;
  }
}
```

## Framework Compatibility Matrix
Quick reference for framework-specific considerations:
| Framework | Integration Point | Notes |
|---|---|---|
| LangChain.js | Custom Tools | Works with all agent types |
| Vercel AI SDK | Tool Functions | Native TypeScript support |
| AutoGen.js | Function Registry | Multi-agent memory sharing |
| Custom Agents | Direct SDK | Full control over memory flow |
## Best Practices
- Search before acting: Always recall context before making decisions
- Be selective: Don't store everything—focus on decisions, preferences, and outcomes
- Use semantic tagging: Tags like `architecture`, `error-pattern`, and `user-preference` improve retrieval
- Version your memories: Include timestamps or version tags for evolving projects
- Handle failures gracefully: Memory operations shouldn't break agent flow
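The last practice is worth code: a small wrapper that turns any memory failure or hang into a fallback result, so a backend outage degrades to "no memories" instead of crashing the agent loop. This is an illustrative sketch; the 2-second timeout default is an assumption, not a CodeMem recommendation:

```typescript
// Race the memory operation against a timeout; errors and timeouts both
// resolve to the caller-supplied fallback instead of throwing.
async function safeRecall<T>(
  op: () => Promise<T>,
  fallback: T,
  timeoutMs = 2000
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), timeoutMs);
  });
  try {
    // Whichever settles first wins; rejections degrade to the fallback.
    return await Promise.race([timeout, op().catch(() => fallback)]);
  } finally {
    clearTimeout(timer);
  }
}

// Usage (memory.search as in the examples above):
//   const context = await safeRecall(() => memory.search({ query }), []);
```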
## Common Pitfalls
- Memory bloat: Storing raw conversations instead of distilled insights
- Missing context isolation: Not tagging by project/user leads to cross-contamination
- Synchronous blocks: Memory ops are async—don't block the response stream
- Over-retrieval: Fetching 50 memories when 5 would suffice wastes tokens
## Ready to Add Memory to Your Agent?
Get started with CodeMem in under 5 minutes. Your agents deserve to remember.
Get Your API Key →

## Next Steps
Now that you know the integration patterns:
- Read MCP 101: Quickstart for the fundamentals
- Explore Memory Layers to understand when to use which memory type
- Check out Designing Memory Schemas (concepts apply to JS too)