# Memory Management
Memoer provides memory management for Large Language Models (LLMs), enabling applications to maintain context across a conversation.
## Memory Strategies
Memoer offers several memory management strategies for controlling how much conversation history your LLM application retains:
### Sliding Window Strategy
The sliding window strategy keeps only the most recent messages in the conversation:
```typescript
import { memoer, MemoryConfig, ConversationStrategy } from "memoer";

// Create a memory configuration with a sliding window
const memoryConfig: MemoryConfig = {
  id: "conversation-1",
  systemMessage: {
    role: "system",
    content: "You are a helpful assistant."
  },
  managers: {
    conversation: {
      strategy: ConversationStrategy.SLIDING_WINDOW,
      slidingWindowSize: 10 // Keep only the 10 most recent messages
    }
  }
};

const memory = memoer.createMemory(memoryConfig);
```
This strategy suits applications where only recent exchanges matter: older messages are dropped, keeping token usage bounded while preserving the most relevant context.
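The trimming itself is conceptually simple. The sketch below illustrates the general idea behind a sliding window, independent of Memoer's internals: system messages are always kept, and only the most recent N conversation messages survive. The `ChatMessage` shape and `applySlidingWindow` helper are illustrative assumptions, not part of Memoer's API.

```typescript
// Conceptual sketch of sliding-window trimming (not Memoer's internal code).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function applySlidingWindow(
  messages: ChatMessage[],
  slidingWindowSize: number
): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  // Always keep system messages; keep only the most recent N others.
  return [...system, ...rest.slice(-slidingWindowSize)];
}
```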
### Token Buffer Strategy
The token buffer strategy limits conversations based on token count rather than message count:
```typescript
import { memoer, MemoryConfig, ConversationStrategy } from "memoer";

// Create a memory configuration with a token buffer
const memoryConfig: MemoryConfig = {
  id: "conversation-1",
  systemMessage: {
    role: "system",
    content: "You are a helpful assistant."
  },
  managers: {
    conversation: {
      strategy: ConversationStrategy.TOKEN_BUFFER,
      maxTokens: 4000 // Maximum number of tokens to retain
    }
  }
};

const memory = memoer.createMemory(memoryConfig);
```
This keeps the conversation within the model's token limit while retaining as much recent context as the budget allows.
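To show how a token budget differs from a message count, the sketch below trims from the newest message backwards until the budget is spent. It is not Memoer's implementation: `estimateTokens` is a rough character-based heuristic, and a real implementation would use the model's tokenizer for exact counts.

```typescript
// Conceptual sketch of token-buffer trimming (not Memoer's internal code).
type Msg = { role: string; content: string };

// Rough heuristic (~4 characters per token); a real implementation would
// use the model's tokenizer for accurate counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function applyTokenBuffer(messages: Msg[], maxTokens: number): Msg[] {
  const kept: Msg[] = [];
  let total = 0;
  // Walk backwards from the newest message, keeping messages until the
  // token budget is exhausted; unshift restores chronological order.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (total + cost > maxTokens) break;
    kept.unshift(messages[i]);
    total += cost;
  }
  return kept;
}
```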
### Summary Strategy
For long-running conversations, Memoer provides a summarization strategy that maintains a summary of earlier interactions:
```typescript
import { memoer, MemoryConfig, ConversationStrategy } from "memoer";

// Create a memory configuration with summarization
const memoryConfig: MemoryConfig = {
  id: "conversation-1",
  systemMessage: {
    role: "system",
    content: "You are a helpful assistant."
  },
  managers: {
    conversation: {
      strategy: ConversationStrategy.SUMMARY,
      summaryInterval: 10 // Generate a summary every 10 messages
    }
  }
};

const memory = memoer.createMemory(memoryConfig);
```
This approach maintains the essence of longer conversations without consuming excessive tokens.
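As a rough illustration of the pattern (not Memoer's implementation), the sketch below folds the oldest messages into a single summary message once the interval is reached; the `summarize` callback stands in for whatever LLM call produces the summary and is an assumption made for this example.

```typescript
// Conceptual sketch of interval-based summarization (not Memoer's internal code).
type Message = { role: string; content: string };

async function compactHistory(
  messages: Message[],
  summaryInterval: number,
  summarize: (msgs: Message[]) => Promise<string> // assumed LLM-backed callback
): Promise<Message[]> {
  if (messages.length < summaryInterval) return messages;

  // Condense the oldest `summaryInterval` messages into one system-level note
  // and keep the rest of the conversation verbatim.
  const oldest = messages.slice(0, summaryInterval);
  const remainder = messages.slice(summaryInterval);
  const summary = await summarize(oldest);

  return [
    { role: "system", content: `Summary of earlier conversation: ${summary}` },
    ...remainder
  ];
}
```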