Build for Understanding — May 14, 2026

You can’t help people you don’t understand. No understanding + no trust = undifferentiated = slop.

“Companion” seems to be a dirty word, but if – like me – you believe AI can help people with real problems, then you need to look past the word. Many agents will rely on deep user knowledge and trust – how will you build them?

To help with code, doesn’t your coding agent need code context? It builds and leverages understanding far beyond your prompting. Without knowing about you and your life, what is AI supposed to do when you ask it why your wife is treating you like that, or how much you should worry about that mole, or how to be more present for your kids? What the heck does _it_ know?

Build connections strong enough for the AI to learn from people in real time, then deepen that understanding _asynchronously_. Use that understanding to help people, then stay engaged so you can learn from the result and help better next time.

I eschewed big parts of the AI hype cycle because I care more about finding ways to help _normal people_ (not weirdos like us). One important lesson: memory is almost *always* important. You could be accumulating medical records over years and re-examining them after every quick health catch-up, or giving good relationship advice because you know my girlfriend’s attachment style, or whatever – but you can’t help people you don’t understand.
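
Here’s a minimal sketch of that accumulate-and-re-examine loop – every name in it is hypothetical, the shape of the idea rather than any real product’s API:

```typescript
// A toy sketch of the learn → deepen → help → learn loop. All names are
// hypothetical; this shows the shape of the idea, not a real product API.

interface Observation {
  at: Date;
  note: string; // e.g. a medical record, or something learned mid-conversation
}

interface Memory {
  observations: Observation[];
  profile: string; // the distilled understanding of the person
}

// Real time: capture what the person shares during a session.
function observe(memory: Memory, note: string): Memory {
  const observation: Observation = { at: new Date(), note };
  return { ...memory, observations: [...memory.observations, observation] };
}

// Asynchronously, after the session: re-examine everything and deepen the profile.
async function deepen(
  memory: Memory,
  summarise: (prompt: string) => Promise<string>, // any LLM call
): Promise<Memory> {
  const notes = memory.observations
    .map((o) => `- ${o.at.toISOString()}: ${o.note}`)
    .join("\n");
  const profile = await summarise(
    `Current profile:\n${memory.profile}\n\nObservations:\n${notes}\n\nRewrite the profile.`,
  );
  return { ...memory, profile };
}
```

The point isn’t the code – it’s that the profile gets rebuilt from the whole history after every interaction, so understanding compounds instead of resetting.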

I’m building for _understanding_.

Better Context for Human Turns — May 12, 2026

“Now there’s too much code to review!” (lots of this going around in product companies)

You can increase autonomy and stop reviewing – get out of the loop with software factories. That tends towards slop – hard to sell, though it depends on your use case.

Alternatively, reframe: AI dev creates too much *change* to review. It’s not only code, but behaviour, conceptual models, UX, architecture, dependencies – many things are changing, and code is only one possible representation.

There are better change representations we could review to keep our mental models up to date, and quickly identify problems _and opportunities_. As an aspirational example of non-code representations of software, look at what the great ⚗️ Josh Price is doing to introspect built Ash applications with Clarity.

Why don’t we start producing change representations of this quality in our AI dev? I’m using test trees and hexagonal architecture consistently at the moment – what representation could I be reviewing instead of those test trees? This is not a rhetorical question – if you know, tell me.
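
For illustration, here’s a toy sketch of one non-code representation I can imagine reviewing: a diff over the test tree between two revisions. The test names and types are invented – this only shows the shape of the idea:

```typescript
// Toy sketch: represent a change as a diff over the test tree instead of the code.
// The test names are invented; any runner that can list tests could feed this.

type TestTree = Set<string>; // e.g. "orders > refunds > rejects refunds after 30 days"

function diffTestTrees(before: TestTree, after: TestTree) {
  return {
    added: [...after].filter((t) => !before.has(t)),
    removed: [...before].filter((t) => !after.has(t)),
  };
}

const before: TestTree = new Set([
  "orders > refunds > accepts refunds within 30 days",
]);
const after: TestTree = new Set([
  "orders > refunds > accepts refunds within 30 days",
  "orders > refunds > rejects refunds after 30 days",
]);

// A reviewer reads behavioural change, not implementation:
// { added: ["orders > refunds > rejects refunds after 30 days"], removed: [] }
console.log(diffTestTrees(before, after));
```

A reviewer scanning that diff learns what behaviour changed without reading a line of implementation – that’s the kind of representation I mean.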

Developing ways to represent changes from unsupervised agentic work _is extremely high leverage and badly neglected_ – what a waste! Is anybody working on this, or does anybody want to?

Generated example below.

Cron + Thinker CLI — May 8, 2026

I still can’t believe how well this works. My research flow produces great output with a cheap model _and thinking turned off_.

It draws on my recent Git history (lots more could be pulled in), reflects on my work, and runs repeated cycles of research and reflection to identify and distill new insights for improving my current work.

I regularly consider and apply the latest research to my AI projects, so this is really helpful for me.

https://github.com/elimydlarz/thinker-cli
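
For anyone curious about the shape of that flow, here’s a hypothetical reconstruction of the cycle – the function names are my assumptions, not thinker-cli’s actual interface (see the repo for that):

```typescript
// Hypothetical reconstruction of the research/reflection cycle described above.
// llm and gatherGitHistory are stand-ins, not thinker-cli's real API.

async function thinkerCycle(
  llm: (prompt: string) => Promise<string>,
  gatherGitHistory: () => Promise<string>,
  cycles = 3,
): Promise<string> {
  const recentWork = await gatherGitHistory(); // recent commits, diffs, messages
  let insights = "";
  for (let i = 0; i < cycles; i++) {
    // Research: look outward, given the context of my actual work.
    const findings = await llm(
      `Given this recent work:\n${recentWork}\n\nResearch techniques that could improve it.`,
    );
    // Reflect: fold the findings back into a distilled set of insights.
    insights = await llm(
      `Reflect on these findings:\n${findings}\n\nPrior insights:\n${insights}\n\nDistill new, actionable insights.`,
    );
  }
  return insights; // distilled suggestions for improving current work
}
```

Cron then runs this on a schedule, so the distilled insights are simply waiting for me each morning.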

Good Advice Wasted on a Coding Agent — May 7, 2026
Let’s Not Jump on a Call — May 6, 2026

Sometimes I want to think deeply about something, write thoughtfully, have the other party consider my point of view, reflect on their own perspective, and write a meaningful reply.

Those conversations align our thinking and advance our work _conceptually_ and strategically, so that day-to-day decisions fall out naturally.

But when I write deeply, most people want to jump on a call right away, resulting in shallow conversation and more calls. We might make some progress, but the deepest questions at the root of our work remain unanswered. Calls are great, but they don’t replace _thinking_.

This is a cry for deep thinkers and people who _like reading and writing_ – let’s be friends!

PMs are Getting the Wrong Impression — April 30, 2026

Product-focused leaders are trying out AI on their own codebases and getting exactly the wrong impression.

Jack uses his org’s codebase with AI: “Wow, this is going so well – I can add features so easily! AI is much better than our engineers.”

Nick does the same: “Oh no, AI keeps breaking things – I’m glad we have such great engineers; they’re even better than AI.”

They are both completely wrong.

Coding agents benefit a lot from good developer experience, consistent architecture, and particularly from strong testing. On their own they’re only OK at the moment. If you give them a strong platform, they can build on top of it really well without direction. If you give them a bad platform, they will struggle, just like a new team member would.

Jack had a great experience with AI because his team is so diligent in protecting his org’s ability to innovate. AI is faster than the team because it wants to make Jack happy this session – ideally this message. Jack’s coding agent is mangling his codebase while showing him new features that only look like they work, but he misinterprets his experience in favour of the AI – now brittleness increases every session until he can’t deliver anymore.

Nick had a bad experience because his team is incompetent or rushing. Their software is already brittle, but they can operate in it for now with tribal knowledge. AI can’t – it’s a builder showing up at a home that’s falling apart and being told to add an extra floor. It doesn’t go well and Nick misinterprets his experience in favour of the team. Now he’s stuck with them and they’re stuck with their brittle code. At least refactoring is still possible.

Obviously there’s heaps of context to think about, but these are realistic possibilities for product-focused leaders in small orgs.

Spec-Driven Development and Semantics — April 29, 2026

It doesn’t matter whether SDD is “Agile”, “agile”, or “waterfall”. It could be used in any of those ways.

Also, I think the whole thing is a little funny. “Spec-driven” meaning you specify what you want? Is there any other way? All prompting is specifying.

Real discussions are hiding in these topics, trapped behind stuck-in-the-past arguments about semantics.

I hope writing next-gen software is less repetitive — April 26, 2026

Even though I am doing lots of AI dev, I am more interested in building next-gen software than implementing last-gen software more quickly.

Next-gen software could be more generalisable. A strong core agent + strong memory + generative UI + self-inflating deterministic software internal to the agent may solve many different problems with minimal or conversational customisation.
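
As a rough sketch of that composition – these interfaces are pure speculation on my part, not a real framework:

```typescript
// Speculative interfaces only – sketching the composition, not a real framework.

interface Memory {
  recall(query: string): Promise<string>;
  store(fact: string): Promise<void>;
}

interface Tool {
  name: string;
  run(input: string): Promise<string>;
}

interface GenUI {
  render(state: string): Promise<string>; // UI generated per situation, not hand-built
}

interface CoreAgent {
  memory: Memory;
  tools: Tool[]; // deterministic software internal to the agent...
  writeTool(spec: string): Promise<Tool>; // ...which it can grow ("self-inflate")
  ui: GenUI;
  solve(problem: string): Promise<string>;
}
```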

If we look forward a little further, I hope the need to write similar software over and over again, as quickly as possible, can be mitigated instead of serviced.

Looking at Code — April 24, 2026

When I see the “I don’t even look at the code” vs “You have to look at the code” debate, I imagine two engineering managers having the same discussion about their devs rather than their coding agents.

I don’t think it’s very different to before. Leadership styles are so contextual:
– How capable is the dev team? (model, harness)
– What are the manager’s skills? Can they meaningfully help at that level of detail, or will micromanagement hurt?
– Are devs speaking business, or does business have to speak dev?

Has anything really changed about this debate? 🤷‍♂️

Good Advice Wasted on Coding Agents — April 23, 2026