Gralkor — April 8, 2026

I needed a better memory plugin for OpenClaw, so I made one – Gralkor (https://lnkd.in/gQyn2HTA)

I don’t mean better than the default, I mean better than the top OpenClaw memory plugins.

I started with the best open-source, temporally aware memory available – Graphiti (https://lnkd.in/gpRn5SXC). I’ve worked with many graph and vector memory systems and Graphiti still amazes me. Graphiti’s strengths are perfect for a long-running personal agent – I really appreciate Zep sharing it with us.

On top of Graphiti, I’ve put a lot of myself and the latest research into Gralkor.

I was quite surprised at how other memory plugins work. Typically they capture only individual question-and-answer pairs – not much context to extract from! What about ideas that come together slowly over the course of a whole conversation?

Instead, I learned heaps about OpenClaw’s hooks and figured out how to ingest whole episodes that make sense as tasks and conversations. More context, richer extraction, deeper understanding.

Did you know that most memory plugins for OpenClaw only remember dialog? When your agent tells you it did this or that last week, it doesn’t remember doing it – it remembers saying it did. Ask it how, and it will extrapolate confidently, and that error compounds in memory. Your agents mostly don’t remember what they thought either, including how they solved their last problem – I certainly couldn’t work under those conditions!

Instead, I built a distillation process to ingest thoughts and actions in context with dialog, tuning for the highest fidelity possible without crowding the graph with tool call parameters.
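As a rough sketch of that kind of distillation – the event shapes and the size threshold here are my own illustrative assumptions, not Gralkor's actual pipeline – the idea is to keep dialog, thoughts, and actions in their original order, but elide bulky tool-call parameters before anything reaches the graph:

```python
def distill(events: list[dict], max_param_chars: int = 120) -> list[dict]:
    """Keep dialog, thoughts, and actions in context; elide bulky parameters.

    Each event is a dict like {"kind": "dialog" | "thought" | "tool_call", ...},
    where tool calls carry a string "params" field. Oversized parameters are
    replaced with a short stub, so the action itself is still remembered
    without crowding the graph with raw tool-call payloads.
    """
    out = []
    for ev in events:
        if ev["kind"] == "tool_call":
            params = ev.get("params", "")
            if len(params) > max_param_chars:
                ev = {**ev, "params": f"<{len(params)} chars of parameters elided>"}
        out.append(ev)
    return out
```

The point of the ordering is fidelity: the thought that preceded an action, and the dialog around both, survive into memory together.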

Gralkor provides a simple platform for experimenting with memory consolidation and learning. You’ve got cron; just add Thinker CLI and Gralkor to start your quest for recursive self-improvement. We can learn together – ask me for my reflection cron! This is showing up a lot in research now as ERL.
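For flavour, a reflection cron can be as simple as one scheduled line. The `thinker run reflect` command below is purely a hypothetical placeholder for whatever your Thinker CLI setup exposes, not a documented invocation:

```shell
# Hypothetical crontab entry (the `thinker` command and its subcommand are
# placeholders): every night at 03:00, run a reflection pass that asks the
# agent to consolidate the day's episodes in memory into durable lessons.
0 3 * * * thinker run reflect >> "$HOME/reflect.log" 2>&1
```

The interesting part is what the reflection prompt does with the day's episodes, not the scheduling itself.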

Finally, custom ontologies! You can define your own entities and relationships, using a configuration scheme designed for accurate classification.

You could focus on standard domain language, or structure your agent’s memory around your model of the world. This is another idea starting to come up in research.
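To make the custom-ontology idea concrete, here is a minimal sketch of what such a configuration could look like. The schema and the example domain below are my own illustration, not Gralkor's actual configuration format:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EntityType:
    name: str
    description: str   # a natural-language gloss the extractor can classify against

@dataclass(frozen=True)
class RelationType:
    name: str
    source: str        # entity type name
    target: str        # entity type name

@dataclass
class Ontology:
    entities: list[EntityType] = field(default_factory=list)
    relations: list[RelationType] = field(default_factory=list)

    def validate(self) -> None:
        """Check that every relation endpoint names a declared entity type."""
        known = {e.name for e in self.entities}
        for r in self.relations:
            for endpoint in (r.source, r.target):
                if endpoint not in known:
                    raise ValueError(f"unknown entity type: {endpoint}")

# Example: a tiny personal-agent ontology (the domain choices are illustrative)
ontology = Ontology(
    entities=[
        EntityType("Project", "a piece of work the user is pursuing"),
        EntityType("Tool", "software the user relies on"),
    ],
    relations=[RelationType("USES", source="Project", target="Tool")],
)
ontology.validate()
```

Descriptions matter here: giving the extractor a short gloss per type is one way to push it toward accurate classification rather than guesswork.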

So, enjoy Gralkor (https://lnkd.in/g79xCK2V). Star it, let me know what you think, tell your friends – all those nice things. Great trees need strong roots.

The easiest optimisation is going faster —

For a little while now I’ve been baking some favoured ways of working into my coding agent as project DX: OpenCode config, skills, and rules. I can already work much faster this way than I could by myself, or with Antigravity. Reliability is high, output is good, and I’m working at a level of abstraction that I really enjoy. My mentality here is something like ADDD https://lnkd.in/gsAizijA (thanks Obie Fernandez for the link).

I also have fine-grained control over planning and behaviour using test-trees as contracts. The codebase is well controlled, tested, and documented, benefitting from good adherence to my preferred practices.

Once people reach a stage they’re happy with, I see they often optimise by running Ralph loops overnight. Maybe that’s not for me. I don’t want to work asynchronously just because the agent is too slow or unreliable; I want the agent to be faster. I want to go as fast as I can think at my current level of abstraction, which I am really enjoying.

Parallelisation doesn’t sound great either – all the old points about cycle time over throughput still apply. I don’t want to architect for easier parallel development (a compromise we’ve made too often already), or have to integrate a bunch of worktrees, or increase my mental load with things that are lower priority anyway, or start new work before learning from the last piece of work.

For now I can think far faster than GPT 5.2 codex high can work (on this project, with my workflow), so I’m on the hunt for more tokens per second. I’m not the first – I’m waiting for faster coding plans to drop, and reportedly they will sell out in minutes. When I can keep working this way at a few thousand tokens per second, it will be an absolute delight.

Once you’ve got it working, the easiest optimisation is probably going faster.