For a little while now I’ve been baking my favoured ways of working into my coding agent as project DX, OpenCode config, skills, and rules. I can already work much faster this way than I could by myself, or with Antigravity. Reliability is high, output is good, and I’m working at a level of abstraction that I really enjoy. The mentality is something like ADDD https://lnkd.in/gsAizijA (thanks Obie Fernandez for the link).
I also have fine-grained control over planning and behaviour, using test-trees as contracts. The codebase is well controlled, tested, and documented, and it benefits from good adherence to my preferred practices.
Once people reach a stage they’re happy with, I see them often optimise by running Ralph loops overnight. Maybe that’s not for me. I don’t want to work asynchronously just because the agent is too slow or unreliable; I want the agent to be faster. I want to go as fast as I can think at my current level of abstraction, which I am really enjoying.
Parallelisation doesn’t sound great either – all the old points about cycle time over throughput still apply. I don’t want to architect for easier parallel dev (that’s a compromise we’ve made too much in the past already), or have to integrate a bunch of worktrees, or increase my mental load with things that are lower priority anyway, or start new work before learning from the last piece of work.
For now I can think far faster than GPT 5.2 codex high can work (on this project, with my workflow), so I’m on the hunt for more tokens per second. I’m not the first: I’m waiting for faster coding plans to drop, and reportedly they will sell out in minutes. When I can keep working this way at a few thousand tokens per second, it will be an absolute delight.
Once you’ve got it working, the easiest optimisation is probably going faster.
