I needed a better memory plugin for OpenClaw, so I made one – Gralkor (https://lnkd.in/gQyn2HTA)

I don’t mean better than the default, I mean better than the top OpenClaw memory plugins.

I started with the best open source, temporally-aware memory available – Graphiti (https://lnkd.in/gpRn5SXC). I’ve worked with many graph and vector memory systems and Graphiti still amazes me. Graphiti’s strengths are perfect for a long-running personal agent – I really appreciate Zep sharing it with us.

On top of Graphiti, I’ve put a lot of myself and the latest research into Gralkor.

I was quite surprised at how other memory plugins work. Typically they capture only individual question-and-answer pairs – not much context to extract! What about ideas that come together slowly over the course of a whole conversation?

Instead, I learned heaps about OpenClaw’s hooks and figured out how to ingest whole episodes that make sense as tasks and conversations. More context, richer extraction, deeper understanding.
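The episode idea can be sketched roughly like this – a buffer that accumulates every turn and only flushes when a task or conversation completes, so extraction sees the whole arc at once. The hook names, class names, and episode shape below are my illustration, not Gralkor's or OpenClaw's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Turn:
    role: str       # e.g. "user", "assistant", "thought", "action"
    content: str

@dataclass
class EpisodeBuffer:
    """Accumulates whole conversations instead of single Q&A pairs.

    Hypothetical sketch: in a real plugin, on_turn/on_task_end would be
    wired to the agent framework's hooks, and flushed episodes would be
    handed to a graph memory backend for extraction.
    """
    turns: list = field(default_factory=list)
    episodes: list = field(default_factory=list)

    def on_turn(self, role: str, content: str) -> None:
        # Called from a (hypothetical) per-message hook.
        self.turns.append(Turn(role, content))

    def on_task_end(self) -> None:
        # Called from a (hypothetical) end-of-task hook: flush the whole
        # episode so extraction sees the full conversation as one unit.
        if not self.turns:
            return
        body = "\n".join(f"{t.role}: {t.content}" for t in self.turns)
        self.episodes.append({
            "reference_time": datetime.now(timezone.utc),
            "body": body,
        })
        self.turns = []
```

The payoff is that entities and ideas spread across many turns land in the extractor together, instead of as disconnected fragments.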

Did you know that most memory plugins for OpenClaw only remember dialog? When your agent tells you it did this or that last week, it doesn’t remember doing it – it remembers saying it did. Ask how and it will extrapolate confidently and the error compounds in memory. Your agents mostly don’t remember what they thought either, including how they solved their last problem – I sure couldn’t work under those conditions!

Instead, I built a distillation process to ingest thoughts and actions in context with dialog, tuning for the highest fidelity possible without crowding the graph with tool call parameters.

Gralkor provides a simple platform to experiment with memory consolidation and learning. You've already got cron; just add Thinker CLI and Gralkor to start your quest for recursive self-improvement. We can learn together – ask me for my reflection cron! This is showing up a lot in research now as ERL.

Finally, custom ontologies! You can define your own entities and relationships, using a configuration scheme designed for accurate classification.

You could focus on standard domain language, or structure your agent's memory around your model of the world. This is another idea that's starting to come up in research.
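To make the ontology idea concrete, here's the kind of configuration I mean – declared entity and relationship types, plus a check that rejects edges the ontology doesn't allow, which is half the battle for accurate classification. The names and schema here are illustrative, not Gralkor's actual configuration format:

```python
# Hypothetical ontology config: type names and fields are illustrative.
ONTOLOGY = {
    "entities": {
        "Project": {"description": "A codebase or initiative the agent works on"},
        "Tool": {"description": "A CLI or API the agent can invoke"},
    },
    "relationships": {
        "USES": {
            "source": "Project",
            "target": "Tool",
            "description": "Project depends on or invokes Tool",
        },
    },
}

def valid_edge(rel: str, source_type: str, target_type: str) -> bool:
    # Classification gets easier when the extractor can only propose
    # edges the ontology permits.
    spec = ONTOLOGY["relationships"].get(rel)
    return bool(spec) and spec["source"] == source_type and spec["target"] == target_type
```

The descriptions matter: they're what guides an LLM extractor toward the right label rather than a plausible-sounding wrong one.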

So, enjoy Gralkor (https://lnkd.in/g79xCK2V). Star it, let me know what you think, tell your friends – all those nice things. Great trees need strong roots.