# Conversation Management

Memoer makes it easy to manage conversations with LLMs by providing a robust conversation API.
## Adding Messages to a Conversation

You can easily add messages to a conversation:

```typescript
import { memoer, MemoryConfig } from "memoer";

// Initialize memory
const memory = memoer.createMemory({
  id: "conversation-1",
  systemMessage: {
    role: "system",
    content: "You are a helpful assistant."
  }
});

// Add a user message to the conversation
memory.conversation.add({
  role: "user",
  content: "Hello, how are you today?"
});

// Add the assistant's response
memory.conversation.add({
  role: "assistant",
  content: "I'm doing well, thank you! How can I help you today?"
});
```
## Getting Conversation Context

Retrieve the optimized conversation context for sending to an LLM:

```typescript
// Get the current context (filtered by your chosen memory strategy)
const context = await memory.conversation.getContext();

// Use the context with your LLM
const response = await yourLLMProvider.generate({
  messages: context
});
```
## Retrieving Full Conversation History

You can also retrieve the complete conversation history:

```typescript
// Get the complete conversation history, regardless of memory strategy
const fullHistory = await memory.conversation.getFullContext();
```
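To make the distinction concrete, here is a toy sliding-window store, not Memoer's actual implementation, that illustrates why a strategy-filtered `getContext()` and a complete `getFullContext()` can return different message sets (the class and its `windowSize` parameter are illustrative assumptions):

```typescript
// Illustrative only: a toy sliding-window conversation store.
// Memoer's internals are not shown in these docs; this sketch just
// demonstrates the context-vs-full-history distinction.
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

class ToyConversation {
  private history: ChatMessage[] = [];

  constructor(
    private systemMessage: ChatMessage,
    private windowSize: number
  ) {}

  add(message: ChatMessage): void {
    this.history.push(message);
  }

  // Strategy-filtered view: system prompt + the N most recent messages.
  getContext(): ChatMessage[] {
    return [this.systemMessage, ...this.history.slice(-this.windowSize)];
  }

  // Complete view: system prompt + every message ever added.
  getFullContext(): ChatMessage[] {
    return [this.systemMessage, ...this.history];
  }
}
```

With a window of 2, adding five messages leaves `getContext()` with three entries (system prompt plus the two most recent) while `getFullContext()` returns all six.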
## Integration with Vercel AI SDK

Memoer integrates seamlessly with Vercel's AI SDK. Note that with `stream: true` the completion arrives as a stream rather than a finished message, so the assistant's reply is saved to memory in the stream's completion callback:

```typescript
import OpenAI from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";
import { memoer } from "memoer";

const openai = new OpenAI();

// Initialize memory
const memory = memoer.createMemory({
  id: "conversation-1",
  systemMessage: {
    role: "system",
    content: "You are a helpful assistant."
  }
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Add the latest user message to memory
  const latestMessage = messages[messages.length - 1];
  memory.conversation.add(latestMessage);

  // Get the conversation context from memory
  const context = await memory.conversation.getContext();

  // Send to the LLM using the OpenAI client
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: context,
    stream: true
  });

  // Once the stream finishes, add the assistant's full reply to memory
  const stream = OpenAIStream(response, {
    onCompletion: async (completion) => {
      memory.conversation.add({ role: "assistant", content: completion });
    }
  });

  // Return the streaming response
  return new StreamingTextResponse(stream);
}
```
This integration makes it simple to build robust, context-aware chatbots while efficiently managing token usage.
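As a sketch of what "efficiently managing token usage" can mean in practice, here is one standalone way a context strategy might cap a conversation's size: drop the oldest non-system messages until the whole context fits a token budget. This is an illustrative assumption, not Memoer's API; `trimToBudget` and the rough chars/4 token estimate are hypothetical names introduced here:

```typescript
// Illustrative sketch (not Memoer's implementation): cap a message list
// to a token budget by dropping the oldest non-system messages first.
interface Msg {
  role: "system" | "user" | "assistant";
  content: string;
}

// Rough heuristic: ~4 characters per token. A real strategy would use
// a proper tokenizer for the target model.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function trimToBudget(messages: Msg[], maxTokens: number): Msg[] {
  // System messages are always kept; only chat turns are eligible to drop.
  const system = messages.filter((m) => m.role === "system");
  const kept = messages.filter((m) => m.role !== "system");

  const total = (msgs: Msg[]): number =>
    msgs.reduce((sum, m) => sum + estimateTokens(m.content), 0);

  // Drop the oldest chat messages until the context fits the budget.
  while (kept.length > 0 && total([...system, ...kept]) > maxTokens) {
    kept.shift();
  }
  return [...system, ...kept];
}
```

The system prompt is pinned because losing it changes the assistant's behavior; recency-based trimming is the simplest policy, and summarization-based strategies trade a little extra LLM work for retaining older information.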