I’ve been building agentic systems for a while now, and I want to reflect on some of the dominant thinking in the circle of people doing that kind of work.

Multi-agent solutions feel natural to people who understand the value of teamwork. But are we projecting our human need for specialisation onto a new generation of agentic systems, thereby passing our limitations onto them unnecessarily?

When people are trying to do something that seems to exceed the capabilities of their AI system, they usually introduce greater up-front orchestration, which helps short-term – but it’s a trap! As the model gets smarter, the orchestration becomes redundant: wasted effort that carried a high opportunity cost. I got burned that way when I first started.

I’ve found going the other way is usually a better decision. I consider how I can empower my model with more context, rather than treating it like a dummy by giving it narrower instructions and a more specialised role. Context is king.

When pursuing longer-term or high-complexity objectives with AI, we should think less about multi-agent solutions, and more about multigenerational solutions. It’s not about architecting the perfect swarm up-front and trying to force fuzzy LLMs into rigid behaviour. Rather, it’s about carrying on the work over many generations, with each generation able to revise past work and continue, setting new proximal goals, while bearing in mind the distal goal. Therein, the fuzziness introduced by LLMs is essential, rather than detrimental.

To put it another way, it’s recursive graph traversal, but the graph is a family tree being built dynamically. Children revise and continue the work left by their parents, and their understanding of the problem begins to exceed that of earlier generations through the buildup of additional context – of history – ultimately leading to better outcomes.
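One way to sketch that family-tree framing in code (a minimal sketch – the names and structure here are my own illustration, not anything from a particular framework): each generation is a node that inherits the full history of its ancestors, revises the work, and sets a new proximal goal.

```python
from dataclasses import dataclass, field

@dataclass
class Generation:
    """One node in the dynamically built family tree."""
    work: str                    # this generation's (possibly revised) output
    proximal_goal: str           # the near-term goal this generation set
    history: list = field(default_factory=list)   # accumulated context from ancestors
    children: list = field(default_factory=list)

def spawn_child(parent: Generation, revised_work: str, next_goal: str) -> Generation:
    """A child inherits everything its ancestors learned, then builds on it."""
    child = Generation(
        work=revised_work,
        proximal_goal=next_goal,
        # the child's history is the parent's history plus the parent's own contribution
        history=parent.history + [(parent.proximal_goal, parent.work)],
    )
    parent.children.append(child)
    return child

# A tiny lineage: each generation sees more context than the last.
root = Generation(work="first draft", proximal_goal="outline the system")
g2 = spawn_child(root, "revised draft", "implement the core loop")
g3 = spawn_child(g2, "working prototype", "harden edge cases")
```

The point of the sketch is the `history` field: nothing is orchestrated up-front, yet `g3` knows everything `root` and `g2` did, which is where the growing understanding comes from.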

This has proven to be a good framework for thinking about problem solving with AI – multigenerational operations, with correction and autonomy in each layer of the family tree. It’s more like building the Kailasa temple – carved top-down out of a single rock, the form revised as the work descended – than traditional computing.

Just months ago, I thought of these generations as being context windows, modelled as nodes in a graph – relatively short containers of understanding and output, used to delegate to, and build context for, subsequent context windows / nodes. These days, advances in agentic memory and my use of rolling context windows have me focusing more on building and leveraging layers of understanding in a more continuous way. But I still find thinking multigenerationally really helpful whenever I’m tempted to increase orchestration.