SAN FRANCISCO, May 7, 2026 /PRNewswire/ -- Memori Labs today announced the launch of its new agent-native memory infrastructure, advancing how AI agents retain, structure, and reuse knowledge over time.
Unlike traditional memory systems that rely primarily on long-form natural language conversation history, Memori enables agents to automatically create structured, long-term memory directly from the agent trace — including execution paths, tool results, workflow steps, outcomes, and decision-making logic.
This allows memory to be generated not only from what an agent says, but from what an agent actually does.
By structuring memory from agent execution, Memori enables agents to learn from completed tasks, avoid repeating prior mistakes, retrieve relevant operational context, and become more efficient over time. The result is a more durable memory layer that can materially reduce inference spend while helping agents optimize as they complete more workflows.
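The idea of deriving memory from execution rather than from conversation can be illustrated with a small sketch. The record shape, field names, and event format below are illustrative assumptions for this example only, not Memori's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class MemoryRecord:
    """Illustrative structured memory entry (not Memori's real schema)."""
    kind: str                           # e.g. "tool_call", "decision", "outcome"
    summary: str                        # compact, queryable description
    tags: list = field(default_factory=list)


def record_from_trace_event(event: dict) -> MemoryRecord:
    """Turn one agent trace event into a structured, persistent record.

    Instead of storing raw conversation text, we keep only the
    operational facts: which tool ran, with what arguments, and
    whether it succeeded.
    """
    return MemoryRecord(
        kind=event["type"],
        summary=f'{event["tool"]}({event["args"]}) -> {event["status"]}',
        tags=[event["tool"], event["status"]],
    )


# A hypothetical trace event from a completed workflow step
event = {"type": "tool_call", "tool": "run_tests", "args": "unit", "status": "passed"}
rec = record_from_trace_event(event)
print(rec.summary)  # run_tests(unit) -> passed
```

A record like this can later be matched by tool name or outcome, which is what lets an agent recall, for example, every prior failure of a given tool without replaying the conversation that surrounded it.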
Memori's approach is supported by its performance on the LoCoMo long-conversation memory benchmark, where Memori has demonstrated industry-leading recall accuracy and materially better cost efficiency than alternative memory systems. These results validate Memori's broader thesis: production agents need structured, selective, and persistent memory rather than repeated reliance on full-context prompting or unstructured conversation history.
"By automating the creation of structured memories from the agent trace instead of limiting the memory knowledge graph to what agents say, we are capturing tool calls, decisions, workflow steps, outcomes, and other trace events that give the agent a complete picture of its prior activities," said Adam B. Struck, CEO and Co-Founder of Memori Labs. "With Memori, agents can remember and learn from every interaction and execution path - not just the natural language conversation that would otherwise be preserved."
The new agent-native version of Memori is currently available within the OpenClaw harness through version 0.0.11 of the Memori Labs OpenClaw plugin. Memori Labs also plans to bring this agent-native memory capability to Hermes Agent and other harness infrastructure, including Claude, Cursor, and Codex via MCP.
The new OpenClaw plugin introduces these features:
- Structured, persistent memory for AI agents — Memori replaces flat markdown memory files with a structured knowledge graph that captures facts, decisions, outcomes, and patterns across every session — without bloating the prompt.
- Grounded in what agents actually do, not just what they say — Memori captures tool calls, execution traces, and real-time agent decisions alongside conversation, giving agents a fuller picture of prior task execution.
- Agent-controlled recall — Agents decide when and what to retrieve, scoped precisely by project, session, entity, or time range — eliminating irrelevant context and cross-project noise.
- Automatic memory building, zero latency impact — Memory is structured and updated asynchronously after each interaction, so it never slows the agent's response.
- Smarter daily briefs — Memori generates structured daily briefings built from execution traces and structured memory — covering priorities, risks, active goals, open loops, and known failure patterns — far beyond a simple conversation recap.
- Built for multi-user, multi-project environments — Memory is fully scoped and isolated by project, process, session, and entity, preventing data bleed across users and contexts.
- Production-ready observability — Full visibility into memory creation, recall activity, retrieval performance, and quota usage via Memori Cloud.
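The agent-controlled recall described above can be sketched as a simple filter over stored records. The store layout and the `recall` function here are hypothetical illustrations of scoped retrieval, not the plugin's actual API:

```python
def recall(store, project=None, session=None, entity=None):
    """Return only records matching every provided scope filter.

    Scoping by project, session, and entity keeps retrieval narrow,
    so memories from one project never leak into another's context.
    """
    def matches(rec):
        return all(
            rec.get(key) == value
            for key, value in (("project", project),
                               ("session", session),
                               ("entity", entity))
            if value is not None  # unspecified scopes match everything
        )
    return [rec for rec in store if matches(rec)]


# Hypothetical memory store spanning two isolated projects
store = [
    {"project": "billing", "session": "s1", "entity": "invoice", "fact": "retries capped at 3"},
    {"project": "billing", "session": "s2", "entity": "invoice", "fact": "webhook flaky under load"},
    {"project": "search",  "session": "s9", "entity": "index",   "fact": "rebuild takes 40 min"},
]

# Only billing-project memories come back; the search project stays isolated.
print(recall(store, project="billing"))
```

Narrowing by more than one scope at once (for example, `project="billing", session="s2"`) is what eliminates cross-project noise while still letting the agent pull exactly the operational context it needs.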
The setup process takes less than two minutes for existing OpenClaw users.
Memori OpenClaw plugin v0.0.11 is now available to install. It requires OpenClaw v2026.3.2 or later.
Availability
The Memori OpenClaw plugin is available now via `openclaw plugins install @memorilabs/openclaw-memori`. Developers can sign up for a free API key at and view setup documentation at .
Industry-Leading Benchmarks
Memori Labs Benchmark results can be downloaded and verified at .
About Memori Labs
Memori Labs is an agent-native memory platform built for production AI systems. Unlike conversational memory wrappers or vector retrieval layers, Memori structures memory from both conversation and agent execution, turning tool calls, decisions, and workflow traces into persistent, queryable state. Memori's benchmark results reflect the approach: 81.95% accuracy on LoCoMo using only 1,294 tokens per query, roughly 5% of full-context cost, saving users more than 95% on inference. The open-source project has grown to more than 14,000 GitHub stars, signaling strong developer pull. Bessemer Venture Partners has identified memory and context management as a key part of the emerging AI infrastructure harness layer, citing Memori as one of the category leaders.
Media Contact
Adam B. Struck, Memori Labs, +1 561-289-0486, [email protected],
SOURCE Memori Labs