Memory API Reference
All classes below are importable directly from anchor.
For the conceptual guide, see the Memory Guide.
Coordinates conversation memory and persistent facts.
```python
MemoryManager(
    conversation_tokens: int = 4096,
    tokenizer: Tokenizer | None = None,
    on_evict: Callable[[list[ConversationTurn]], None] | None = None,
    persistent_store: MemoryEntryStore | None = None,
    conversation_memory: ConversationMemory | None = None,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| conversation_tokens | int | 4096 | Token budget for the default sliding window. Ignored when conversation_memory is provided. |
| tokenizer | Tokenizer \| None | None | Custom tokenizer. Falls back to the built-in counter. |
| on_evict | Callable \| None | None | Callback for evicted turns. Ignored when conversation_memory is provided. |
| persistent_store | MemoryEntryStore \| None | None | Store for long-term facts. |
| conversation_memory | ConversationMemory \| None | None | Custom conversation backend. Overrides the default sliding window. |
| Property | Type | Description |
|---|---|---|
| conversation | ConversationMemory | The underlying conversation memory instance. |
| conversation_type | str | Returns "sliding_window", "summary_buffer", or the class name. |
| persistent_store | MemoryEntryStore \| None | The persistent store, if configured. |
| Method | Returns | Description |
|---|---|---|
| add_user_message(content) | None | Add a user turn. |
| add_assistant_message(content) | None | Add an assistant turn. |
| add_system_message(content) | None | Add a system turn. |
| add_tool_message(content) | None | Add a tool turn. |
| add_fact(content, tags, memory_type, metadata) | MemoryEntry | Store a persistent fact with content-hash deduplication. Raises StorageError without a store. |
| get_relevant_facts(query, top_k=5) | list[MemoryEntry] | Search the persistent store. Returns [] without a store. |
| get_all_facts() | list[MemoryEntry] | Return all persistent entries. |
| delete_fact(entry_id) | bool | Delete a fact by ID. Returns False if not found. |
| update_fact(entry_id, content) | MemoryEntry \| None | Update fact content. Returns None if not found. |
| get_context_items(priority=7) | list[ContextItem] | Assemble context items. Facts at priority 8, conversation at the given priority. |
| clear() | None | Clear conversation history and persistent store. |
Token-aware sliding window. Thread-safe.
```python
SlidingWindowMemory(
    max_tokens: int = 4096,
    tokenizer: Tokenizer | None = None,
    on_evict: Callable[[list[ConversationTurn]], None] | None = None,
    eviction_policy: EvictionPolicy | None = None,
    recency_scorer: RecencyScorer | None = None,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| max_tokens | int | 4096 | Maximum token budget. Must be positive. |
| tokenizer | Tokenizer \| None | None | Custom tokenizer. |
| on_evict | Callable \| None | None | Called with the list of evicted turns. |
| eviction_policy | EvictionPolicy \| None | None | Custom eviction strategy. Default is FIFO. |
| recency_scorer | RecencyScorer \| None | None | Custom recency scoring. Default is linear 0.5–1.0. |
| Property | Type | Description |
|---|---|---|
| turns | list[ConversationTurn] | Current turns (copy). |
| total_tokens | int | Tokens currently used. |
| max_tokens | int | The configured budget. |
| Method | Returns | Description |
|---|---|---|
| add_turn(role, content, **metadata) | ConversationTurn | Add a turn, evicting old turns if needed. Truncates if a single turn exceeds the budget. |
| to_context_items(priority=7) | list[ContextItem] | Convert to context items with recency-weighted scores. Role goes in metadata, not content. |
| clear() | None | Remove all turns and reset the token count. |
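The budget-driven eviction loop can be sketched in plain Python. This is a simplified stand-in, not the library's implementation: token counts are supplied per turn instead of coming from a tokenizer, and thread safety is omitted.

```python
from collections import deque

def add_turn(window: deque, budget: int, turn: tuple[str, str, int]) -> list:
    """Append a (role, content, tokens) turn, then evict oldest-first (FIFO)
    until the window fits the token budget. Returns the evicted turns."""
    window.append(turn)
    evicted = []
    while sum(t[2] for t in window) > budget and len(window) > 1:
        evicted.append(window.popleft())
    return evicted

window: deque = deque()
add_turn(window, 10, ("user", "hi", 4))
add_turn(window, 10, ("assistant", "hello there", 4))
evicted = add_turn(window, 10, ("user", "one more", 4))  # 12 > 10, oldest goes
```

In the real class, an on_evict callback (if configured) would receive the evicted turns at this point.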
Two-tier memory: recent turns verbatim plus a running summary. Exactly one compaction function required.
```python
SummaryBufferMemory(
    max_tokens: int,
    compact_fn: Callable[[list[ConversationTurn]], str] | None = None,
    progressive_compact_fn: Callable[[list[ConversationTurn], str | None], str] | None = None,
    tokenizer: Tokenizer | None = None,
    summary_priority: int = 6,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| max_tokens | int | (required) | Token budget for the internal sliding window. |
| compact_fn | Callable \| None | None | Simple compaction: receives evicted turns, returns a summary. |
| progressive_compact_fn | Callable \| None | None | Progressive compaction: receives evicted turns and the previous summary. |
| tokenizer | Tokenizer \| None | None | Custom tokenizer. |
| summary_priority | int | 6 | Priority for the summary context item. |
> [!CAUTION]
> Providing both or neither compaction function raises ValueError.

| Property | Type | Description |
|---|---|---|
| summary | str \| None | Running summary, or None before first eviction. |
| summary_tokens | int | Token count of the current summary. |
| turns | list[ConversationTurn] | Live turns in the sliding window. |
| total_tokens | int | Tokens in the live window (excludes summary). |
| Method | Returns | Description |
|---|---|---|
| add_turn(turn) | None | Add a pre-built ConversationTurn. |
| add_message(role, content, **metadata) | ConversationTurn | Add a message by role and content. |
| to_context_items(priority=7) | list[ContextItem] | Summary item (if present) followed by live window items. |
| clear() | None | Clear both the window and the summary. |
In-memory directed graph for entity-relationship tracking.
| Property | Type | Description |
|---|---|---|
| entities | list[str] | All entity IDs. |
| relationships | list[tuple[str, str, str]] | All edges as (source, relation, target). |
| Method | Returns | Description |
|---|---|---|
| add_entity(entity_id, metadata=None) | None | Add or update an entity node. |
| add_relationship(source, relation, target) | None | Add a directed edge. Auto-creates missing nodes. |
| link_memory(entity_id, memory_id) | None | Link a MemoryEntry.id to an entity. Raises KeyError if the entity is missing. |
| get_related_entities(entity_id, max_depth=2) | list[str] | BFS traversal in both directions. The starting entity is excluded. |
| get_memory_ids_for_entity(entity_id) | list[str] | Memory IDs linked to one entity. |
| get_related_memory_ids(entity_id, max_depth=2) | list[str] | Deduplicated memory IDs from the entity and its neighbors. |
| get_entity_metadata(entity_id) | dict[str, Any] | Copy of the entity metadata. Raises KeyError if the entity is missing. |
| remove_entity(entity_id) | None | Remove the entity, its edges, and its memory links. |
| clear() | None | Remove everything. |
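The bidirectional BFS behind get_related_entities can be sketched as a stand-alone function (a simplified illustration, not the library code; neighbor order is sorted here only to make the output deterministic):

```python
from collections import deque

def related_entities(edges: list[tuple[str, str, str]], start: str,
                     max_depth: int = 2) -> list[str]:
    """BFS over directed edges, traversed in both directions,
    up to max_depth hops. The starting entity is excluded."""
    neighbors: dict[str, set[str]] = {}
    for src, _rel, dst in edges:
        neighbors.setdefault(src, set()).add(dst)
        neighbors.setdefault(dst, set()).add(src)  # ignore edge direction
    seen, queue, found = {start}, deque([(start, 0)]), []
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for nxt in sorted(neighbors.get(node, ())):
            if nxt not in seen:
                seen.add(nxt)
                found.append(nxt)
                queue.append((nxt, depth + 1))
    return found

edges = [("alice", "works_at", "acme"), ("acme", "based_in", "berlin")]
```

With these edges, "berlin" is reachable from "alice" at depth 2 even though both edges point away from her.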
Evicts oldest turns first. Matches the built-in default.
| Method | Returns | Description |
|---|---|---|
| select_for_eviction(turns, tokens_to_free) | list[int] | Indices of the oldest turns until enough tokens are freed. |
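A plausible sketch of the FIFO selection, with turns represented here only by their token counts (the real method receives ConversationTurn objects):

```python
def select_for_eviction(turn_tokens: list[int], tokens_to_free: int) -> list[int]:
    """Pick indices oldest-first until at least tokens_to_free are covered."""
    indices, freed = [], 0
    for i, tokens in enumerate(turn_tokens):
        if freed >= tokens_to_free:
            break
        indices.append(i)
        freed += tokens
    return indices
```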
Evicts turns with the lowest importance scores first.
```python
ImportanceEviction(importance_fn: Callable[[ConversationTurn], float])
```
| Parameter | Type | Description |
|---|---|---|
| importance_fn | Callable[[ConversationTurn], float] | Scoring function. Lower scores are evicted first. |
| Method | Returns | Description |
|---|---|---|
| select_for_eviction(turns, tokens_to_free) | list[int] | Indices of the least-important turns until enough tokens are freed. |
Evicts user+assistant turn pairs together. Pairs evicted oldest-first.
| Method | Returns | Description |
|---|---|---|
| select_for_eviction(turns, tokens_to_free) | list[int] | Indices of the oldest turn pairs until enough tokens are freed. |
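The pairing logic can be sketched as follows, under the assumption that turns strictly alternate user/assistant so indices (0, 1), (2, 3), … form pairs (how the real policy identifies pairs is not spelled out here):

```python
def select_pairs_for_eviction(turn_tokens: list[int], tokens_to_free: int) -> list[int]:
    """Evict whole consecutive pairs, oldest first, until enough tokens
    are covered. Assumes indices (0, 1), (2, 3), ... form user+assistant pairs."""
    indices, freed = [], 0
    for start in range(0, len(turn_tokens) - 1, 2):
        if freed >= tokens_to_free:
            break
        indices += [start, start + 1]
        freed += turn_tokens[start] + turn_tokens[start + 1]
    return indices
```

Evicting pairs together keeps the remaining history coherent: no assistant reply survives without the user message that prompted it.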
Exponential recency scoring: (e^(rate * x) - 1) / (e^(rate) - 1).
```python
ExponentialRecencyScorer(decay_rate: float = 2.0)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| decay_rate | float | 2.0 | Controls curve steepness. Must be positive. |
| Method | Returns | Description |
|---|---|---|
| score(index, total) | float | Score in [0.0, 1.0]. index=0 is the oldest turn. |
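The formula above can be evaluated directly, assuming the normalized position x = index / (total - 1) (the exact normalization used internally is an assumption here):

```python
import math

def exponential_recency_score(index: int, total: int, decay_rate: float = 2.0) -> float:
    """(e^(rate*x) - 1) / (e^rate - 1) with x = index / (total - 1).
    Yields 0.0 for the oldest turn and 1.0 for the newest."""
    if total <= 1:
        return 1.0
    x = index / (total - 1)
    return (math.exp(decay_rate * x) - 1) / (math.exp(decay_rate) - 1)
```

Compared with the linear scorer, the exponential curve keeps old turns near 0 and rises steeply only for the most recent ones.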
Linear recency scoring from min_score to 1.0.
```python
LinearRecencyScorer(min_score: float = 0.5)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| min_score | float | 0.5 | Score for the oldest turn. Must be in [0.0, 1.0). |
| Method | Returns | Description |
|---|---|---|
| score(index, total) | float | Score in [min_score, 1.0]. |
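The linear case is a straight interpolation; a sketch, assuming even spacing of the positions from min_score to 1.0:

```python
def linear_recency_score(index: int, total: int, min_score: float = 0.5) -> float:
    """Linear interpolation from min_score (index 0, oldest)
    to 1.0 (index total - 1, newest)."""
    if total <= 1:
        return 1.0
    return min_score + (1.0 - min_score) * index / (total - 1)
```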
Ebbinghaus forgetting curve: R = e^(-t/S) where S = base_strength + access_count * reinforcement_factor.
```python
EbbinghausDecay(base_strength: float = 1.0, reinforcement_factor: float = 0.5)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| base_strength | float | 1.0 | Initial memory strength in hours. Must be positive. |
| reinforcement_factor | float | 0.5 | Strength added per access. Must be non-negative. |
| Method | Returns | Description |
|---|---|---|
| compute_retention(entry) | float | Retention in [0.0, 1.0] based on elapsed time and access count. |
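The stated curve can be computed directly from elapsed hours and access count (the real method reads these from a MemoryEntry; here they are plain arguments):

```python
import math

def ebbinghaus_retention(hours_since_access: float, access_count: int,
                         base_strength: float = 1.0,
                         reinforcement_factor: float = 0.5) -> float:
    """R = e^(-t/S) with S = base_strength + access_count * reinforcement_factor.
    Each access strengthens the memory, slowing its decay."""
    strength = base_strength + access_count * reinforcement_factor
    return math.exp(-hours_since_access / strength)
```

Note how a frequently accessed entry (larger S) retains far more after the same elapsed time, which is the spaced-repetition effect the Ebbinghaus model captures.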
Linear decay from 1.0 to 0.0 over twice the half-life.
```python
LinearDecay(half_life_hours: float = 168.0)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| half_life_hours | float | 168.0 | Hours until retention reaches 0.5 (default 7 days). Must be positive. |
| Method | Returns | Description |
|---|---|---|
| compute_retention(entry) | float | Retention in [0.0, 1.0]: 0.5 at the half-life, 0.0 at twice the half-life. |
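The linear curve follows directly from the description (again taking elapsed hours as a plain argument rather than reading a MemoryEntry):

```python
def linear_retention(hours_elapsed: float, half_life_hours: float = 168.0) -> float:
    """Linear decay from 1.0 to 0.0 over twice the half-life, clamped at 0.0."""
    return max(0.0, 1.0 - hours_elapsed / (2.0 * half_life_hours))
```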
Merges similar memories via embedding cosine similarity and content-hash deduplication.
```python
SimilarityConsolidator(
    embed_fn: Callable[[str], list[float]],
    similarity_threshold: float = 0.85,
    max_cache_size: int = 1000,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| embed_fn | Callable[[str], list[float]] | (required) | Embedding function. The library never calls an LLM. |
| similarity_threshold | float | 0.85 | Cosine similarity above which entries are merged. In [0.0, 1.0]. |
| max_cache_size | int | 1000 | Maximum cached embeddings before the cache is cleared. |
| Method | Returns | Description |
|---|---|---|
| consolidate(new_entries, existing) | list[tuple[MemoryOperation, MemoryEntry \| None]] | ADD (new), UPDATE (merged), or NONE (duplicate) for each entry. |
> [!NOTE]
> Merged entries keep the longer content, combine tags/links/metadata, increment access_count, and use the higher relevance_score.
Prunes expired and decayed entries from a GarbageCollectableStore.
```python
MemoryGarbageCollector(
    store: GarbageCollectableStore,
    decay: MemoryDecay | None = None,
    callbacks: list[MemoryCallback] | None = None,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| store | GarbageCollectableStore | (required) | Store to prune. Must support list_all_unfiltered(). |
| decay | MemoryDecay \| None | None | Decay function. Without it, only expiry pruning runs. |
| callbacks | list[MemoryCallback] \| None | None | Callbacks notified of pruning events. |
| Method | Returns | Description |
|---|---|---|
| collect(retention_threshold=0.1, dry_run=False) | GCStats | Full GC (expiry + decay). |
| collect_expired(dry_run=False) | list[MemoryEntry] | Remove only expired entries. |
| collect_decayed(retention_threshold=0.1, dry_run=False) | list[MemoryEntry] | Remove only decayed entries. Raises ValueError without a decay function. |
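A full collection pass can be sketched with plain dicts standing in for MemoryEntry objects. The field names (expires_at_hours, retention) and the two-phase order (expiry first, then decay) are illustrative assumptions, not the library's internals:

```python
def collect(entries: list[dict], retention_fn, retention_threshold: float = 0.1,
            now_hours: float = 0.0, dry_run: bool = False):
    """Toy GC pass: prune expired entries first, then entries whose
    retention falls below the threshold. Returns (expired, decayed, remaining)."""
    expired = [e for e in entries
               if e.get("expires_at_hours") is not None
               and e["expires_at_hours"] <= now_hours]
    survivors = [e for e in entries if e not in expired]
    decayed = [e for e in survivors if retention_fn(e) < retention_threshold]
    remaining = [e for e in survivors if e not in decayed]
    if dry_run:
        remaining = entries  # report only; nothing is actually removed
    return expired, decayed, remaining

entries = [
    {"id": "a", "expires_at_hours": 1.0, "retention": 0.9},
    {"id": "b", "expires_at_hours": None, "retention": 0.05},
    {"id": "c", "expires_at_hours": None, "retention": 0.5},
]
expired, decayed, remaining = collect(entries, lambda e: e["retention"], now_hours=2.0)
```

Here "a" is pruned as expired, "b" as decayed below the 0.1 threshold, and only "c" survives.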
Statistics from a garbage collection run.
```python
GCStats(expired_pruned: int, decayed_pruned: int, total_remaining: int, dry_run: bool)
```
| Attribute | Type | Description |
|---|---|---|
| expired_pruned | int | Entries pruned due to expiration. |
| decayed_pruned | int | Entries pruned due to low retention. |
| total_remaining | int | Entries remaining after collection. |
| dry_run | bool | Whether this was a dry run. |
| total_pruned | int (property) | Sum of the expired and decayed counts. |
Protocol for observing memory lifecycle events. All methods default to no-ops.
```python
class MemoryCallback(Protocol):
    def on_eviction(self, turns: list[ConversationTurn], remaining_tokens: int) -> None: ...
    def on_compaction(self, evicted_turns: list[ConversationTurn], summary: str, previous_summary: str | None) -> None: ...
    def on_extraction(self, turns: list[ConversationTurn], entries: list[MemoryEntry]) -> None: ...
    def on_consolidation(self, action: str, new_entry: MemoryEntry | None, existing_entry: MemoryEntry | None) -> None: ...
    def on_decay_prune(self, pruned_entries: list[MemoryEntry], threshold: float) -> None: ...
    def on_expiry_prune(self, pruned_entries: list[MemoryEntry]) -> None: ...
```
| Method | Description |
|---|---|
| on_eviction | Turns evicted from the sliding window. |
| on_compaction | Evicted turns compacted into the summary. |
| on_extraction | Memories extracted from conversation. |
| on_consolidation | Consolidation decision (add/update/delete/none). |
| on_decay_prune | Entries pruned for low retention score. |
| on_expiry_prune | Expired entries removed. |
Four-tier progressive summarization memory. Satisfies the ConversationMemory protocol.
Pluggable into MemoryManager and ContextPipeline.
```python
ProgressiveSummarizationMemory(
    max_tokens: int = 8192,
    llm: LLMProvider | str = "anthropic/claude-haiku-4-5-20251001",
    tier_config: list[TierConfig] | None = None,
    max_facts: int = 50,
    fact_token_budget: int = 500,
    tokenizer: Tokenizer | None = None,
    callbacks: list[ProgressiveSummarizationCallback] | None = None,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| max_tokens | int | 8192 | Total token budget across all tiers. Must be positive. |
| llm | LLMProvider \| str | "anthropic/claude-haiku-4-5-20251001" | LLM for summarization. Accepts a provider instance or a "provider/model" string. |
| tier_config | list[TierConfig] \| None | None | Custom tier allocation. Defaults to 4 tiers (verbatim → bullet → paragraph → archive). |
| max_facts | int | 50 | Maximum extracted facts to retain. |
| fact_token_budget | int | 500 | Token budget reserved for facts. |
| tokenizer | Tokenizer \| None | None | Custom tokenizer. Falls back to the built-in counter. |
| callbacks | list \| None | None | Lifecycle callbacks for summarization events. |
| Method | Returns | Description |
|---|---|---|
| add_turn(role, content, **metadata) | ConversationTurn | Add a turn, triggering tier promotion when the budget is exceeded. |
| to_context_items(priority=7) | list[ContextItem] | Assemble context items from all tiers and facts. |
| clear() | None | Clear all tiers and facts. |
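The cascade through the four tiers can be sketched abstractly. This toy version moves whole items between lists and uses per-tier item counts as the budget; the real class re-summarizes content with the LLM at each promotion and budgets by tokens:

```python
def promote(tiers: list[list[str]], budgets: list[int]) -> None:
    """Toy tier promotion: when a tier exceeds its budget, its oldest
    entries cascade into the next tier (verbatim -> bullet -> paragraph
    -> archive). The last tier absorbs whatever reaches it."""
    for i in range(len(tiers) - 1):
        while len(tiers[i]) > budgets[i]:
            tiers[i + 1].append(tiers[i].pop(0))

tiers = [["a", "b", "c"], [], [], []]
promote(tiers, budgets=[2, 2, 2, 999])
```

Because older material is compressed rather than dropped, the oldest turns survive in ever more condensed form instead of being evicted outright.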
Handles LLM calls for summarization and fact extraction. Used internally by ProgressiveSummarizationMemory.
```python
TierCompactor(
    llm: LLMProvider,
    tokenizer: Tokenizer | None = None,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| llm | LLMProvider | (required) | LLM provider for generating summaries. |
| tokenizer | Tokenizer \| None | None | Custom tokenizer. Falls back to the built-in counter. |
| Method | Returns | Description |
|---|---|---|
| summarize(content, target_tier, target_tokens, existing_summary=None) | str | Synchronously summarize content for a target tier. Falls back to truncation on LLM failure. |
| extract_facts(content) | list[str] | Extract key facts from content via the LLM. Returns an empty list on failure. |
Delegates memory extraction to a user-provided function.
```python
CallbackExtractor(
    extract_fn: Callable[[list[ConversationTurn]], list[dict[str, Any]]],
    default_type: MemoryType = MemoryType.SEMANTIC,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| extract_fn | Callable | (required) | Receives turns, returns dicts with a required "content" key. Optional keys: "tags", "memory_type", "metadata", "relevance_score", "user_id", "session_id". |
| default_type | MemoryType | MemoryType.SEMANTIC | Default type when not specified in the dict. |
| Method | Returns | Description |
|---|---|---|
| extract(turns) | list[MemoryEntry] | Build entries from the user function's output. Raises ValueError if "content" is missing. |
Example
```python
from anchor import CallbackExtractor, MemoryType
from anchor.models.memory import ConversationTurn

def my_extractor(turns):
    return [{"content": f"Discussed: {turns[0].content[:40]}", "tags": ["topic"]}]

extractor = CallbackExtractor(extract_fn=my_extractor)
turns = [ConversationTurn(role="user", content="Tell me about Python async")]
entries = extractor.extract(turns)
print(entries[0].content)      # "Discussed: Tell me about Python async"
print(entries[0].memory_type)  # MemoryType.SEMANTIC