Thinker CLI — March 21, 2026

Thinker CLI

I’m sharing Thinker CLI.

You’ve seen me talk about how valuable CLIs are in agent-land already:
– Self-documenting
– Model domain objects and lifecycles
– Model workflows
– Provide fast feedback
– Teach agents incrementally (rather than requiring full usage baked into a skill)
– Run by any shell-using agent
Give an agent a good CLI and it can do _the thing_ even if it doesn’t know how, because _how_ is baked into the CLI.

Thinker CLI brings all these benefits _and it’s super simple_.

Thinker lets anybody define (and share!) a guided, multi-step thought process for their agent in a JSON config file. Agents follow user directions (or automation) to run Thinker with the config file, and Thinker walks them through the multi-turn process call by call using structured inputs, structured outputs, interpolation into templates, and strict validation. This way work is presented to the agent clearly and incrementally, and validated at each step. The agent can “think through” complicated work, programmed in advance.
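To make the mechanism concrete, here is a minimal sketch of that pattern in Python – ordered steps in a config, template interpolation of earlier outputs, strict validation before advancing. The config shape and field names here are illustrative assumptions, not Thinker’s actual schema.

```python
import json
import string

# Hypothetical config in the spirit of a Thinker file: ordered steps, each
# with a prompt template and the structured fields it requires back.
CONFIG = json.loads("""
{
  "steps": [
    {"id": "gather", "prompt": "List the key facts about: $topic",
     "requires": ["facts"]},
    {"id": "draft",  "prompt": "Using these facts: $facts, draft an answer.",
     "requires": ["draft"]},
    {"id": "check",  "prompt": "Critique this draft: $draft",
     "requires": ["verdict"]}
  ]
}
""")

def run_step(index: int, state: dict) -> str:
    """Render one step's prompt, interpolating prior structured outputs."""
    step = CONFIG["steps"][index]
    return string.Template(step["prompt"]).substitute(state)

def accept_output(index: int, state: dict, output: dict) -> dict:
    """Strictly validate the agent's structured output before advancing."""
    step = CONFIG["steps"][index]
    missing = [k for k in step["requires"] if k not in output]
    if missing:
        raise ValueError(f"step {step['id']} missing fields: {missing}")
    return {**state, **output}

# Walked call by call, the way an agent drives the CLI: each call sees only
# the current step, with earlier answers baked into the prompt.
state = {"topic": "trunk-based development"}
state = accept_output(0, state, {"facts": "small batches; CI on trunk"})
prompt = run_step(1, state)
```

The point of the design is that the agent never needs the whole process in context at once – each call delivers one validated step.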

I’ve been using this approach – human-guided CoT sequences with structured inputs and outputs – to great effect in my projects for years now. With good design, it _way_ outperforms the generalised reasoning processes built into current models. I’m really happy I can share it in such a simple way.

Used in an agent, Thinker lets you define steps for searching memory, saving back into memory, researching online, or producing complex artefacts: Thinker CLI lets you compose any of your agent’s functionality into linear sequences using natural language.

Links:
– If you want to read more: https://lnkd.in/g3khXusD
– If you want to tell your agent to install: https://lnkd.in/g-SzcWiU
– Example of a coding agent running it: https://lnkd.in/gyDxBNGv (I normally use Thinker with OpenClaw, but this was easier to get logs of. You see how any agent can use it)

Taming the Y axis with coding agents — March 20, 2026

Taming the Y axis with coding agents

Imagine a starting point – whatever you’re prepared to specify up front about your software. Then far to the right on the x axis imagine a solution. It’s perfect software – exactly what you needed.

Claude doesn’t know how to get from here to there in a straight line. It’s going to do a lot of trial and error – try a few things until something works. At each decision point, Claude might be moving up or down the y axis on its journey along x towards the solution.

Part of this vertical movement is produced by a dangerous fear of crashing: ‘What if this obviously intrinsically necessary field isn’t provided? I’d better engineer a crazy fallback.’ Anthropic, this is my number one problem with Claude Code. Another contributor to extra complexity is that Claude makes confident guesses until something works and then moves on; typically the first thing that works is suboptimal.

This is software engineering as a combinatorial expansion of the guesses that happen to work into code, over time and tokens. At the end you have this kind of zig-zagging line – more complicated than it needs to be, but proven working. I think *overfitted* is the right word for it.

How are you managing this? For me, with one arm (session) I’m extending and with the other I’m pushing the weird adventures on the y axis back toward the most direct route – simplifying.

Cattle not pets : IaC :: Factories not conversations : AI — March 19, 2026

Cattle not pets : IaC :: Factories not conversations : AI

“Cattle, not pets” was a good phrase for helping people understand Phoenix servers, Infrastructure as Code – that whole concept. I like it a lot.

An equivalent for working with AI right now:

“Factories, not conversations.”

You are building a factory for producing output. If the output isn’t good, improve the factory. If you don’t like what it produces, arguing with the last person on the production line won’t help much.

Trunk Sync — March 18, 2026

Trunk Sync

I’m sharing again – this time a bit more fun.

Some issues I’m thinking about lately:

  • Non-technical people vibe coding and frustrated by Git
  • Very technical people running many agents in parallel and frustrated by Git
  • Forward-thinking people running multiple agents in the cloud, and having them get stuck with uncommitted code
  • The usual big-batch problems (see some of my classic topics: XP, CI etc.)

With all that in mind, here is a fun experiment: Trunk Sync. It’s maximum continuous integration for coding agents.

It keeps main and Claude-created work trees (claude -w) in sync with your remote trunk. Agents discover conflicts on write, resolve the conflicted files, and continue. Conflicts are resolved the same way between work trees or across different hosts – it doesn’t matter. It’s just Git, integrating on every file write.
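To illustrate the integrate-on-every-write idea, here is a toy model in Python. This is my sketch of the concept, not Trunk Sync’s real implementation (which uses Git itself): every write syncs against a shared trunk immediately, so a write to a file someone else changed since your last sync surfaces as a conflict right away, at the point of writing.

```python
class ConflictError(Exception):
    """Raised when a write hits a file changed on trunk since our last sync."""
    def __init__(self, name, theirs, ours):
        super().__init__(f"conflict on {name}")
        self.name, self.theirs, self.ours = name, theirs, ours

class Trunk:
    """Shared trunk: filename -> (version, content)."""
    def __init__(self):
        self.files = {}

class Worktree:
    """A work tree that integrates with the trunk on every single write."""
    def __init__(self, trunk: Trunk):
        self.trunk = trunk
        self.seen = {}  # trunk version we last synced, per file

    def write(self, name: str, content: str) -> None:
        latest = self.trunk.files.get(name)
        if latest and latest[0] != self.seen.get(name, 0):
            # Someone else pushed since our last sync: the conflict is
            # discovered now, on write, not days later at merge time.
            raise ConflictError(name, theirs=latest[1], ours=content)
        version = (latest[0] if latest else 0) + 1
        self.trunk.files[name] = (version, content)
        self.seen[name] = version

    def sync(self, name: str) -> str:
        """Pull the trunk's current version of a file before resolving."""
        version, content = self.trunk.files[name]
        self.seen[name] = version
        return content
```

Two worktrees writing the same file demonstrate the flow: the second writer gets a conflict immediately, syncs, resolves, and rewrites – batch size one, every time.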

Repo is here https://lnkd.in/gTCf6pSw and your agent can install it for you. Good example of it working here: https://lnkd.in/gpYe4X3t

Use with caution – understand this is an experimental approach and you will not be able to control what gets pushed.

Trunk-Sync and Git as a safe space — March 17, 2026

Trunk-Sync and Git as a safe space

Trunk Sync is extreme continuous integration for Claude Code: https://lnkd.in/gTCf6pSw. Easy agentic conflict resolution on write; it keeps work trees, remote agents – everything – aligned on every write, using only Git.

I’m quite enjoying this – it was a deliberate jump out of my comfort zone (until now I kept commits under human control and resisted machine commits).

All my work is heavily tested, but naturally I am not running the whole suite on every file write – the main risk seems to be building atop an already broken build, which you could do very fast this way. So far so good, though.

AI and pulling up ladders — March 16, 2026

AI and pulling up ladders

The narrative goes something like: AI is pulling the ladder up for junior devs. Seniors are multiplying their value. The gap is widening.

I spent ten years dragging many of those seniors out of Gitflow. They’re not all thought leaders on LinkedIn – they spend half their time navigating org charts. TDD was too wild for them, XP just a naive dream. How many will adapt fast enough?

Even people prepared to change are constrained by orgs that aren’t. Most orgs are trying to fit AI into existing structures instead of rethinking them. Untangling each org’s Gordian knot takes great leadership, effort, and time. How many will adapt fast enough?

This is the great flattening. The next generation doesn’t need a ladder to waterfall thinking, sprint planning, and 3 square meetings a day. They don’t need narrow roles, outdated processes, or $30M in seed funding. They don’t need to adapt – only grow. They can just build things for people.

It’s the orgs pulling up ladders that should be worried.

Steering with fast feedback — March 15, 2026

Steering with fast feedback

Fast feedback works very well for Claude Code harnesses.

Opus 4.6 is very good at hypothesising about how to do something, but quickly conflates its hypothesis with fact and overcommits to the (possibly) wrong approach.

If you provide DX that lets it fail quickly, accumulate context and hypothesise again, it can solve outsized problems.

I’m interested in less planning and more *steering* using fast feedback.

Iteration is for everybody — March 14, 2026

Iteration is for everybody

At Thoughtworks I participated in and ran this exercise where 2 teams do civil engineering projects with dry spaghetti and putty. Team Big Batch would build their tower (or whatever) in 1 hour, and Team Small Batch would build their tower in 20 minutes, and then get to do it again, and then get to do it again. Guess who wins?

My software engineering career benefitted a lot from this principle. I learned fast pairing with great engineers, but also by trying and failing in DX that provided very fast feedback. In our strongest teams, we put great care into building and improving our feedback loops. As an organisation, we built our own tools to do it.

Now I’m seeing the principle in action with coding agents. With a harness (that’s what we call DX for coding agents, I think) that provides fast feedback, coding agents can try, fail, and accumulate context until they succeed. Sometimes they get lucky, but typically success comes because somewhere in that accumulated context there were answers – or even insights. What should we do with them? Note that you and I have benefitted from our insights over decades.

One project ended just as our team was reaching an incredible velocity. Everything had come together and we were absolutely smashing it, then we had to go home. What an accumulation of context we must have had when we threw away all our leverage – but that was business. We shouldn’t do it that way again with agents.

Token speed and AFK dev orchestration —

Token speed and AFK dev orchestration

Since I started posting about how token speed is the easiest optimisation and maintains developer engagement (if that’s a thing you still want!), OpenAI started offering Codex at 1,200 tokens/sec and Anthropic added a /fast mode to Claude where you pay extra to run Opus faster.

I’m really interested in AFK dev and working on things in that space at the moment, and the effort going into autonomy is still very important. But I do wonder how we would feel about the effort going into parallelisation and overnight runs if we had, say, 2,000 tokens/sec inference with our favourite models.

Subtraction and simplification — March 13, 2026

Subtraction and simplification

Another typical coding agent problem is adding and adding and adding until the problem is solved, then walking away without really understanding what solved the issue, or building that understanding into the codebase. A good question to ask, or incorporate into your instructions and rules as appropriate for your agent:

“Can this be achieved by subtraction or simplification, rather than addition?”