Research · Engineering & AI · 16 min read

AI Agent Memory: Your Agents Forget Everything After Every Conversation

LLMs are stateless. Every conversation starts from zero. Your customer support agent cannot remember the user it spoke to five minutes ago. Your coding assistant loses all project context when the session ends. A new category of infrastructure — agent memory systems — is emerging to solve this. Mem0 raised $24M to build the memory layer. Letta models memory like an operating system. Zep tracks how facts change over time. AWS launched AgentCore Memory. This is the engineering guide to making your agents remember.

Author: Abhishek Sharma · Fordel Studios

Ask your AI agent what it discussed with a customer yesterday. It cannot tell you. Ask it what tools it used to solve a similar problem last week. It has no idea. Ask it to remember a user’s preference from a previous session. It will hallucinate one or politely explain that it does not retain information between conversations.

This is not a bug. It is the fundamental architecture of large language models. LLMs are stateless. Every request arrives in a vacuum. The model receives a prompt, generates a response, and immediately forgets the entire interaction. There is no persistence layer. There is no session state. There is no memory. The next request starts from absolute zero.

For chatbots answering one-off questions, this is acceptable. For production AI agents — systems that manage customer relationships over months, debug codebases over weeks, or process insurance claims across multiple sessions — statelessness is a disqualifying limitation. An agent that cannot remember is an agent that cannot learn, cannot personalise, and cannot build on its own work.

  • 26% accuracy improvement with dedicated memory layers — Mem0 vs OpenAI baseline on the LOCOMO benchmark, with 90% fewer tokens consumed
  • $24M raised by Mem0 for agent memory infrastructure — led by Basis Set Ventures, with backing from the CEOs of Datadog, Supabase, PostHog, and GitHub
  • 0% of LLMs retain state between API calls by default — every conversation starts from zero unless you build the memory layer yourself
···

The Stateless Problem: Why Context Windows Are Not Memory

The instinctive response to "my agent forgets everything" is to stuff more context into the prompt. Bigger context windows. More conversation history. Longer system prompts. This is not memory. It is a workaround that breaks at scale.

Context windows have grown dramatically — Claude offers 1 million tokens, GPT-5.4 launched with 1 million, Gemini supports up to 2 million. But a large context window does not solve the memory problem. It compounds it.

Why Bigger Context Windows Do Not Equal Memory
  • Cost scales linearly with context length. Stuffing 200K tokens of conversation history into every request turns a $0.03 call into a $3.00 one.
  • Retrieval degrades with context size. Models perform worse at finding specific facts buried in long contexts — the "lost in the middle" problem is well-documented.
  • Context is ephemeral. It exists for one request. When the session ends, the context is gone. There is no persistence across sessions, users, or agent instances.
  • No prioritisation. A context window treats every token equally. It cannot distinguish between a critical user preference mentioned once and a throwaway comment repeated twenty times.
  • No consolidation. Memory requires merging, updating, and resolving contradictions over time. Context windows are append-only — they accumulate, they do not learn.

The distinction is fundamental. A context window is RAM — fast, volatile, gone when the process ends. Memory is disk — persistent, structured, available across sessions. Production agents need both, and until recently, the industry was trying to solve a disk problem by buying more RAM.

A Taxonomy of Agent Memory

Human memory is not a single system. Neither is agent memory. Production implementations require four distinct types, each serving a different purpose and operating at a different timescale.

Working Memory

The scratchpad for the current task. This is what lives inside the context window during a single interaction — the user’s current request, relevant tool outputs, intermediate reasoning steps. Working memory is fast and disposable. It exists for the duration of one agent run and is discarded. Every agent has this by default; it is the context window itself.

Episodic Memory

Records of past interactions. What happened, when, with whom, and what was the outcome. Episodic memory lets an agent say "the last time this user asked about billing, the issue was a duplicate charge — let me check for that first." AWS AgentCore Memory’s new episodic functionality captures structured episodes with context, reasoning, actions, and outcomes so agents can learn from past experiences. This is the memory type most agents lack entirely.

Semantic Memory

Long-term factual knowledge about users, entities, and relationships. User preferences, company policies, product specifications, team structures. Semantic memory is what makes an agent feel like it "knows" you — it remembers that you prefer dark mode, that your company uses Kubernetes, that your manager is Sarah. This is the memory type that personalisation depends on.

Procedural Memory

Knowledge of how to do things. Which tools to use for which tasks, which API endpoints require authentication, which workflows have worked before and which have failed. Procedural memory lets an agent get better at its job over time, avoiding past mistakes and reusing successful approaches. Claude Code’s auto-memory system is an example — it records build commands, debugging patterns, and architectural decisions so the next session starts with accumulated project knowledge.

| Memory Type | Persistence | Example | Default in LLMs? |
|---|---|---|---|
| Working | Single request | Current conversation context, tool outputs | Yes — this is the context window |
| Episodic | Cross-session | "Last time this user called, we resolved a billing dispute" | No — requires external storage |
| Semantic | Long-term | "This user prefers Python, works at Acme Corp, timezone EST" | No — requires extraction and persistence |
| Procedural | Long-term | "For this codebase, always run tests before deploying" | No — requires learning from outcomes |
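The three persistent types map naturally onto distinct record shapes. A minimal sketch of what each might look like in code — the class and field names here are illustrative, not drawn from any particular platform. Working memory needs no record type: it is the context window itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EpisodicMemory:
    """One past interaction: what happened, with whom, and the outcome."""
    user_id: str
    summary: str
    outcome: str
    occurred_at: datetime

@dataclass
class SemanticMemory:
    """A long-lived fact about a user or entity."""
    user_id: str
    fact: str
    confirmed: bool = False

@dataclass
class ProceduralMemory:
    """A learned rule about how to do a task."""
    scope: str   # e.g. a codebase, workflow, or tool name
    rule: str
    success_count: int = 0

episode = EpisodicMemory(
    user_id="u-42",
    summary="User reported a duplicate charge on their invoice",
    outcome="refund issued",
    occurred_at=datetime(2026, 1, 5, tzinfo=timezone.utc),
)
```

Keeping the shapes distinct matters later: episodic records are queried by recency and outcome, semantic records by entity, and procedural records by task scope.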
···

The Framework Landscape: Who Is Building Agent Memory

A new category of infrastructure has emerged in the past twelve months. These are not prompt engineering tricks or RAG wrappers. They are purpose-built memory systems with extraction pipelines, graph stores, conflict resolution, and retrieval optimisation. The market split into three tiers: dedicated memory platforms, agent framework integrations, and cloud provider offerings.

Dedicated Memory Platforms

Mem0
  • Architecture: dual-layer — a vector store plus a knowledge graph (Mem0g). Extracts, consolidates, and retrieves memories automatically.
  • Key differentiator: 26% accuracy gain over OpenAI on the LOCOMO benchmark; 91% lower p95 latency. The graph variant excels at temporal reasoning (58% vs OpenAI’s 22% on time-sensitive queries).
  • Production readiness: SOC 2 compliant. Managed cloud or self-hosted. $24M funded. Used by production teams at scale.

Zep
  • Architecture: temporal knowledge graph that tracks how facts change over time, combining graph-based memory with vector search.
  • Key differentiator: temporal reasoning is first-class — it answers queries like "how did this user’s goals change from Q1 to Q3?" and supports multi-hop entity relationship traversal.
  • Production readiness: low-latency managed service. Enterprise-focused. Strong in regulated industries.

Letta (ex-MemGPT)
  • Architecture: OS-inspired — main context as RAM, external storage as disk. The agent decides when to page information in and out.
  • Key differentiator: memory is explicit and editable. Developers can inspect and modify memory blocks, and the agent has autonomy over its own memory management.
  • Production readiness: stateful agent runtime. Open-source core. Managed cloud available.

Cognee
  • Architecture: knowledge-graph-first — structures unstructured content into semantic vectors plus graph relationships before retrieval.
  • Key differentiator: processes and structures data before storing it, not after. Integrates with Redis for fast vector recall and Kuzu for graph relationships.
  • Production readiness: open-source. Deployed in regulated industries. Redis partnership for performance.

Agent Framework Integrations

LangMem provides memory within the LangGraph ecosystem, storing memories as JSON documents with vector search. It is the simplest option for teams already using LangGraph, but lacks entity extraction, relationship modelling, and temporal reasoning. It stores memories as flat key-value items — functional, but architecturally limited compared to graph-based approaches.

CrewAI added short-term, long-term, and entity memory in its latest releases. AutoGen supports memory through plugin architectures. These framework-level integrations are convenient but shallow — they solve basic persistence without the extraction, consolidation, and conflict resolution that dedicated platforms provide.

Cloud Provider Offerings

AWS launched Amazon Bedrock AgentCore Memory with short-term and long-term memory types, plus a new episodic functionality that captures structured episodes (context, reasoning, actions, outcomes) so agents can learn from past interactions. As of March 2026, AgentCore Memory supports streaming notifications via Amazon Kinesis, eliminating the need to poll for memory changes. It is available in 15 AWS regions.

OpenAI’s ChatGPT memory feature persists across all conversations for Plus, Team, and Enterprise users, but the Agents SDK memory is more basic — it provides conversation history management without the extraction and consolidation that dedicated platforms offer. Google’s Gemini has similar session-level memory but no cross-session persistence in the API.

Anthropic took a different approach with Claude Code’s auto-memory system, which writes structured memory files (CLAUDE.md and MEMORY.md) that persist as part of the project filesystem. This is memory as code — version-controlled, inspectable, and editable by both the agent and the developer. It is the most transparent implementation, but it is file-based, not queryable at scale.

Graph Memory vs Flat Memory: The Architecture Decision That Matters

The most consequential architectural decision in agent memory is whether to store memories as flat vectors or as knowledge graphs. This is not an academic distinction. It determines what your agent can and cannot remember.

Flat vector memory stores each memory as an embedding in a vector database. Retrieval uses similarity search — "find memories similar to the current query." This works for simple factual recall: "What is this user’s preferred language?" The embedding for that memory will be similar to the query, and retrieval succeeds.

It fails for relational and temporal queries. "Who does this user report to?" requires traversing a relationship, not finding a similar vector. "How has this user’s sentiment changed over the last three months?" requires temporal ordering and trend analysis. "What are all the projects this user’s team is working on?" requires multi-hop entity traversal. Flat vectors cannot answer these questions.

| Query Type | Flat Vector Memory | Graph Memory |
|---|---|---|
| Simple factual recall | Works well. "User prefers Python" is retrieved by similarity. | Works well. Same recall, slightly more overhead. |
| Relational queries | Fails. Cannot traverse "user → reports to → manager → team." | Native. Relationships are first-class edges in the graph. |
| Temporal reasoning | Fails. No concept of time ordering or change tracking. | Supported. Zep tracks fact changes over time. Mem0g scores 58% vs 22% on temporal queries. |
| Multi-hop reasoning | Fails. Each retrieval is independent; cannot chain lookups. | Native. Graph traversal handles "find all projects by users in Sarah’s team." |
| Contradiction resolution | Last-write-wins or duplicates. No conflict detection. | Graph structure makes contradictions visible and resolvable. |

The practical recommendation: if your agent handles single-user, single-session interactions with simple personalisation needs, flat vector memory is sufficient and simpler to operate. If your agent manages multi-user relationships, tracks changes over time, or needs to reason about entity connections, graph memory is not optional — it is the only architecture that supports these queries.
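The multi-hop point is easy to see concretely. A toy sketch, using a plain adjacency dict rather than any real graph store, shows why traversal answers questions that similarity search structurally cannot:

```python
# A toy knowledge graph: entities are nodes, relationships are labelled edges.
graph = {
    "alice": {"reports_to": ["sarah"], "works_on": ["project-x"]},
    "bob":   {"reports_to": ["sarah"], "works_on": ["project-y"]},
    "sarah": {"manages": ["alice", "bob"]},
}

def multi_hop(start: str, *relations: str) -> list[str]:
    """Follow a chain of labelled relations from a starting entity."""
    frontier = [start]
    for rel in relations:
        frontier = [n for node in frontier
                    for n in graph.get(node, {}).get(rel, [])]
    return frontier

# "What are all the projects worked on by people Sarah manages?"
projects = multi_hop("sarah", "manages", "works_on")
# A flat vector store has no edge to follow here: no single stored memory
# resembles the question, so similarity search cannot chain the two lookups.
```

Production graph stores add indexing, persistence, and temporal versioning on top, but the retrieval primitive is the same: edges first, embeddings second.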

Mem0’s graph-enhanced variant scores 58.13% on temporal reasoning tasks where OpenAI’s flat memory scores 21.71%. That is not a marginal improvement. It is the difference between an agent that remembers and an agent that guesses.
···

Production Failure Modes: How Memory Goes Wrong

Memory does not just fail by being absent. It fails in ways that are harder to detect and more damaging than statelessness. These are the failure modes that production teams discover after deployment.

Context Poisoning

Incorrect or hallucinated information enters the memory store. Because agents reuse and build upon stored context, these errors compound. An agent hallucinates a user preference in one session, stores it as memory, and then confidently acts on that fabricated preference in every subsequent session. The error becomes self-reinforcing. Detection requires validation at write time — verifying that memories extracted from conversations are grounded in what was actually said, not in what the model inferred or fabricated.
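One cheap write-time guard is a lexical grounding check: reject extracted facts whose content words never appeared in the conversation. This is a deliberately simple sketch — production systems might use an LLM judge or entailment model instead, and the threshold here is an arbitrary assumption:

```python
def is_grounded(fact: str, transcript: str, min_overlap: float = 0.6) -> bool:
    """Reject extracted facts whose content words do not appear in the
    conversation they were supposedly extracted from."""
    content_words = {w for w in fact.lower().split() if len(w) > 3}
    if not content_words:
        return False
    transcript_words = set(transcript.lower().split())
    overlap = len(content_words & transcript_words) / len(content_words)
    return overlap >= min_overlap

transcript = "i would like to switch my plan to the enterprise tier please"
is_grounded("switch plan to enterprise tier", transcript)         # grounded
is_grounded("user prefers phone support over email", transcript)  # fabricated
```

A check like this will not catch subtle inferences, but it blocks the worst case: memories with no lexical anchor in the source conversation at all.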

Context Distraction

The agent retrieves too much historical context and over-relies on past behaviour instead of reasoning fresh about the current request. If a user asked about billing three times in the past, the agent assumes every new question is about billing. The memory system floods the context window with historical patterns, and the model follows the pattern instead of reading the current input. This is the retrieval equivalent of overfitting.

Temporal Confusion

Memories stored without timestamps or version tracking become contradictory over time. A user’s company was "Acme Corp" in January and "Nexus Inc" in March because they changed jobs. Flat memory stores both as equally valid facts. The agent now confidently states the user works at Acme Corp — or Nexus Inc — depending on which embedding happens to be more similar to the current query. Without temporal awareness, the agent cannot distinguish between "current fact" and "historical fact."

Privacy Leakage

Multi-tenant memory systems that share storage across users risk leaking information between accounts. A fact extracted from User A’s conversation appears in User B’s context because the memory retrieval query was too broad. In regulated industries — healthcare, finance, insurance — this is not just a bug. It is a compliance violation. Memory systems must enforce strict tenant isolation at the storage and retrieval layers, not just at the application layer.

Memory System Failure Prevention Checklist
  • Validate memories at write time — check that extracted facts are grounded in actual conversation content, not model inference
  • Set retrieval limits — cap the number of memories injected into context per request (start with 5–10, measure impact)
  • Track memory provenance — every memory should link back to the conversation turn where it was extracted
  • Version facts with timestamps — mark when each memory was created and last confirmed, so the agent can prefer recent facts
  • Enforce tenant isolation at the storage layer — separate memory namespaces per user, per organisation, not just per query filter
  • Implement memory decay — reduce retrieval weight for older, unconfirmed memories rather than treating all memories as equally valid

Building a Production Memory Architecture

No single framework covers every memory requirement. Production systems layer multiple approaches. Here is the architecture that handles the full spectrum of memory needs.

Production Agent Memory Stack

01
Separate extraction from storage from retrieval

Memory extraction (deciding what to remember from a conversation), memory storage (persisting it durably), and memory retrieval (finding relevant memories for a new request) are three different problems. Do not conflate them. Use an LLM to extract structured facts from conversations. Store them in a dedicated memory backend (Mem0, Zep, or your own graph store). Retrieve using a combination of semantic similarity and explicit filters (user ID, recency, memory type).
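The three-stage separation can be sketched as three independent functions. The extraction step here is a placeholder rule standing in for an LLM call with a structured-output prompt, and the retrieval heuristic is keyword overlap standing in for embedding similarity — both are assumptions for illustration:

```python
def extract_facts(transcript: str) -> list[str]:
    """Extraction: decide what is worth remembering.
    Placeholder rule; in production this is an LLM call."""
    return [line.removeprefix("FACT: ") for line in transcript.splitlines()
            if line.startswith("FACT: ")]

def store_facts(store: dict, user_id: str, facts: list[str]) -> None:
    """Storage: persist durably, namespaced by user."""
    store.setdefault(user_id, []).extend(facts)

def retrieve(store: dict, user_id: str, query: str, limit: int = 5) -> list[str]:
    """Retrieval: explicit filter (user_id) plus a relevance heuristic.
    Keyword overlap stands in for semantic similarity here."""
    q = set(query.lower().split())
    candidates = store.get(user_id, [])
    return sorted(candidates, key=lambda f: -len(q & set(f.lower().split())))[:limit]

store: dict = {}
transcript = "FACT: prefers Python\nuser said hello\nFACT: timezone EST"
store_facts(store, "u-7", extract_facts(transcript))
```

Because the stages are decoupled, each can be swapped independently: a better extraction prompt, a graph backend instead of a dict, or a reranker in retrieval, without touching the other two.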

02
Use graph storage for relational data, vector storage for factual recall

Not every memory needs a knowledge graph. User preferences, simple facts, and procedural knowledge work well as vector embeddings with metadata. Entity relationships, organisational structures, temporal fact tracking, and multi-hop queries require graph storage. Many production systems run both — Cognee’s architecture with Redis for vector memory and Kuzu for graph relationships is a proven pattern.

03
Implement memory consolidation as a background process

Raw conversation extracts are noisy. A consolidation pipeline runs periodically (hourly or daily) to merge duplicate memories, resolve contradictions using timestamps, promote frequently-accessed memories, and demote stale ones. This is analogous to garbage collection — without it, memory accumulates unboundedly and retrieval quality degrades.
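A consolidation pass can be surprisingly small. This sketch assumes memories carry a `key`, a timestamp, and a `confirmed` flag (field names are illustrative): it keeps only the newest version of each fact and retires stale, never-confirmed ones.

```python
from datetime import datetime, timedelta, timezone

def consolidate(memories: list[dict], max_age_days: int = 180) -> list[dict]:
    """Periodic pass: deduplicate by fact key, keep the newest version,
    and retire old memories that were never confirmed by the user."""
    now = datetime.now(timezone.utc)
    newest: dict[str, dict] = {}
    for m in memories:
        key = m["key"]
        if key not in newest or m["as_of"] > newest[key]["as_of"]:
            newest[key] = m
    return [m for m in newest.values()
            if m.get("confirmed") or now - m["as_of"] < timedelta(days=max_age_days)]
```

Run on a schedule, this keeps the store bounded: contradictions resolve toward the newest timestamp, and unconfirmed memories age out instead of accumulating forever.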

04
Make memory inspectable and editable

Letta’s key insight: memory should be transparent. Developers must be able to inspect what the agent remembers, edit incorrect memories, and delete sensitive information. This is not just good engineering — it is a compliance requirement under GDPR (right to erasure), CCPA, and healthcare regulations. If you cannot show a regulator exactly what your agent "knows" about a patient, you have a problem.

05
Control retrieval volume aggressively

The default instinct is to retrieve as many relevant memories as possible. This causes context distraction. Production systems should retrieve the minimum memories needed, ranked by relevance and recency. Start with 5–10 memories per request. Measure the impact on response quality. Increase only if quality improves. More memory in context is not always better — the "lost in the middle" problem applies to memories just as it applies to documents.
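Relevance-plus-recency ranking with a hard cap can be expressed in a few lines. The half-life and the relevance function are assumptions to tune, not fixed values:

```python
from datetime import datetime, timedelta, timezone

def rank_memories(memories, query_score, half_life_days=30.0, limit=5):
    """Rank by relevance weighted with exponential recency decay, and
    hard-cap how many memories reach the context window."""
    now = datetime.now(timezone.utc)
    def score(m):
        age_days = (now - m["as_of"]).total_seconds() / 86400
        return query_score(m) * 0.5 ** (age_days / half_life_days)
    return sorted(memories, key=score, reverse=True)[:limit]

recent = {"id": "a", "as_of": datetime.now(timezone.utc) - timedelta(days=1)}
old = {"id": "b", "as_of": datetime.now(timezone.utc) - timedelta(days=90)}
top = rank_memories([old, recent], query_score=lambda m: 1.0)
# with equal relevance, the one-day-old memory outranks the 90-day-old one
```

The decay term also implements the "memory decay" item from the checklist above: old unconfirmed memories do not disappear, they simply stop winning the ranking.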

···

Memory and Multi-Agent Systems

Memory becomes substantially more complex in multi-agent architectures. When multiple agents collaborate on a task, they need shared memory (facts all agents can access), private memory (agent-specific knowledge), and coordination memory (awareness of what other agents have done and decided).

Consider a customer support system with a Triage agent, a Billing agent, and an Escalation agent. The Triage agent identifies the issue and routes it. The Billing agent resolves billing disputes. The Escalation agent handles cases that require human intervention. Each agent needs its own procedural memory (how to do its specific job), but all three need shared access to the customer’s interaction history and semantic memory (who the customer is, what their account looks like, what previous issues they have had).

Without shared memory, the Billing agent asks the customer to repeat information they already gave the Triage agent. Without private memory, the Escalation agent does not know that a specific type of dispute is better handled by offering a credit than by opening a formal investigation. Without coordination memory, two agents may attempt the same resolution simultaneously.

Multi-Agent Memory Requirements
  • Shared memory namespace: facts accessible to all agents in a workflow (customer profile, interaction history)
  • Private memory namespace: agent-specific knowledge (tooling preferences, learned heuristics)
  • Coordination memory: record of actions taken, decisions made, and handoff context between agents
  • Conflict resolution: when two agents update the same memory concurrently, which write wins?
  • Access control: not all agents should read all memories — a billing agent should not access medical records in a healthcare system
The memory problem for multi-agent systems is not "how do agents remember." It is "how do agents share what they know without sharing what they should not." This is an access control problem disguised as a memory problem.
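Treating it as an access control problem suggests a simple shape: namespaced stores with a per-agent read ACL. This is an illustrative sketch, not any framework's API — the namespace names and ACL scheme are assumptions:

```python
class MemoryBus:
    """Shared, private, and coordination memory with per-agent read rules."""

    def __init__(self, acl: dict[str, set[str]]):
        self.acl = acl                         # agent -> namespaces it may read
        self.spaces: dict[str, list[str]] = {}

    def write(self, namespace: str, item: str) -> None:
        self.spaces.setdefault(namespace, []).append(item)

    def read(self, agent: str, namespace: str) -> list[str]:
        if namespace not in self.acl.get(agent, set()):
            raise PermissionError(f"{agent} may not read {namespace}")
        return self.spaces.get(namespace, [])

bus = MemoryBus(acl={
    "triage":  {"shared", "coordination", "private:triage"},
    "billing": {"shared", "coordination", "private:billing"},
})
bus.write("shared", "customer: jane@acme.example, Enterprise plan")
bus.write("coordination", "triage routed case #88 to billing")
bus.read("billing", "shared")              # allowed
# bus.read("billing", "private:triage")    # would raise PermissionError
```

The key property is that isolation is enforced at the read path, not left to each agent's prompt: an agent physically cannot retrieve a namespace it is not granted.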

The Economics of Agent Memory

Agent memory is not free, but the alternative — stuffing full conversation history into every context window — is dramatically more expensive.

Mem0’s benchmark data shows 90% token savings compared to raw conversation history injection. Instead of sending 50,000 tokens of chat history in every request, a memory system extracts the 20 relevant facts (perhaps 500 tokens) and injects only those. At current API pricing, this is the difference between $0.50 per request and $0.005 per request — a 100x cost reduction on the context portion of each call.
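The arithmetic is worth making explicit. Assuming an illustrative rate of $10 per million input tokens (real pricing varies by model and provider):

```python
# Back-of-envelope check of the context-cost claim above.
PRICE_PER_TOKEN = 10 / 1_000_000   # assumed $10 per million input tokens

raw_history_tokens = 50_000        # full chat history injected every request
memory_tokens = 500                # ~20 extracted facts instead

raw_cost = raw_history_tokens * PRICE_PER_TOKEN   # $0.50 per request
memory_cost = memory_tokens * PRICE_PER_TOKEN     # $0.005 per request
savings_ratio = raw_cost / memory_cost            # 100x on context tokens
```

The ratio is independent of the per-token price: it is simply the ratio of injected tokens, which is why the savings hold even as model pricing shifts.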

The memory platform itself has costs: Mem0’s managed cloud starts at usage-based pricing, Zep offers enterprise tiers, and self-hosted options require compute for the extraction pipeline and storage for the memory backend. But these costs are a fraction of the token savings they enable.

  • 90% token savings with structured memory vs raw history — source: Mem0 LOCOMO benchmark; extract and inject relevant facts instead of full conversation logs
  • 100x potential cost reduction per request on context tokens — 500 tokens of structured memories vs 50,000 tokens of raw conversation history
···

What to Build This Week

If your agents currently operate statelessly, here is the minimum viable memory implementation that delivers immediate value.

Minimum Viable Agent Memory

01
Extract and persist user facts after each conversation

After each agent interaction, run a simple extraction prompt: "What new facts did we learn about this user?" Store the results as key-value pairs with timestamps and the source conversation ID. This alone — even without a dedicated memory platform — gives your agent basic cross-session awareness. A JSON file per user is sufficient to start.
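The persistence side of this step fits in one function. A sketch assuming the facts have already come back from the extraction prompt — file layout and field names are illustrative:

```python
import json
import time
from pathlib import Path

def persist_facts(user_id: str, facts: list[str], conversation_id: str,
                  root: Path = Path("memory")) -> Path:
    """Append timestamped facts to a per-user JSON file — the simplest
    durable memory store that still records provenance."""
    root.mkdir(exist_ok=True)
    path = root / f"{user_id}.json"
    existing = json.loads(path.read_text()) if path.exists() else []
    existing.extend({"fact": f, "source": conversation_id, "ts": time.time()}
                    for f in facts)
    path.write_text(json.dumps(existing, indent=2))
    return path
```

Recording the source conversation ID with every fact is cheap now and invaluable later: it is the provenance link the failure-mode checklist calls for.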

02
Inject relevant memories at the start of each request

Before the agent processes a new request, retrieve the top 5–10 most relevant memories for the current user. Inject them into the system prompt as structured context. Use semantic similarity if you have embeddings, or simple recency if you do not. Even injecting "User prefers email over phone. User is on the Enterprise plan. Last issue was about API rate limits." transforms the interaction.
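The injection itself is a small prompt-assembly step. A sketch (the label text is an arbitrary choice):

```python
def build_system_prompt(base_prompt: str, memories: list[str]) -> str:
    """Prepend retrieved memories as a clearly labelled context block,
    so the model treats them as background, not as the current request."""
    if not memories:
        return base_prompt
    memory_block = "\n".join(f"- {m}" for m in memories)
    return f"{base_prompt}\n\nKnown about this user:\n{memory_block}"

prompt = build_system_prompt(
    "You are a support agent.",
    ["Prefers email over phone", "On the Enterprise plan",
     "Last issue: API rate limits"],
)
```

Labelling the block explicitly matters: an unlabelled dump of facts is more likely to be echoed back verbatim or confused with the user's current message.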

03
Add a memory feedback loop

Let the agent mark memories as confirmed (the user validated this fact) or contradicted (new information overrides this). Over time, confirmed memories rise in retrieval ranking and contradicted memories are retired. This prevents context poisoning without requiring a complex consolidation pipeline.
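The feedback loop needs only two signals. A sketch, with illustrative field names:

```python
def apply_feedback(memory: dict, feedback: str) -> dict:
    """'confirmed' boosts retrieval rank; 'contradicted' retires the memory.
    A lightweight guard against context poisoning."""
    if feedback == "confirmed":
        memory["rank_boost"] = memory.get("rank_boost", 0) + 1
    elif feedback == "contradicted":
        memory["retired"] = True
    return memory

m = {"fact": "User prefers dark mode"}
apply_feedback(m, "confirmed")
apply_feedback(m, "confirmed")     # rank_boost accumulates across sessions
apply_feedback(m, "contradicted")  # retired; excluded from future retrieval
```

Retrieval then filters out retired memories and adds `rank_boost` to the relevance score, so validated facts gradually crowd out unverified ones without any batch pipeline.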

04
Evaluate whether you need a dedicated platform

If your memory needs are simple (user preferences, basic interaction history), a vector store with metadata filtering is sufficient. If you need temporal reasoning, relationship traversal, or multi-agent shared memory, evaluate Mem0, Zep, or Letta. The dedicated platforms pay for themselves when memory complexity exceeds what a simple vector store can handle — which typically happens around 10,000 active users or when you deploy your second agent.

···

Where Fordel Builds

We build production AI agent systems for SaaS, finance, healthcare, and insurance clients. Memory is not an afterthought in our architectures — it is a core infrastructure decision made before the first agent is deployed. We select and implement the right memory architecture based on each client’s requirements: graph-based memory for relationship-heavy domains like healthcare and insurance, vector-based memory for high-volume customer interactions, and hybrid approaches for multi-agent systems that need both.

If your agents are starting every conversation from scratch, losing customer context between sessions, or burning tokens by stuffing full histories into every request, we can design and implement the memory layer that makes your agents actually remember. Reach out.
