Codex and Claude are way too defensive. I think this is a good time to talk about defensive programming.

Say I believe some scenario is impossible, and that if it somehow does happen there will be an error – a console error, a request failure, something noisy – but life will go on.

This is actually good. I am probably not wrong, so there is no reason to complicate my code. If I am wrong, great! Through failing fast (and good observability) I will discover my wrongness and we will all be better off for it. The effects of being wrong in software are cumulative and sometimes fatal, so we want to uncover wrongness early.

Building a good understanding of how data actually flows through your system is important. You should not just guess. You also should not defend against everything by default. You should actually check, be confident that you know, and be eager to discover that you are wrong.

The worst thing is taking a wild guess at how some unexpected edge case should be handled, when you really have no idea why it would have happened, or what the downstream implications of your handling will be. It is routine for coding agents to mishandle errors, downgrade them to warnings, or swallow them entirely – the very errors that reveal critical misunderstandings and attendant design problems in your (their?) software.
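To make the contrast concrete, here is a minimal Python sketch (the function and field names are hypothetical). The defensive version quietly invents a fallback when its assumption breaks, so the misunderstanding stays invisible; the fail-fast version crashes at the source, where the bug is still cheap to find.

```python
# Hypothetical scenario: we "know" every event dict carries a user_id.

def handle_event_defensive(event: dict) -> str:
    # Defensive: if the assumption is wrong, silently guess a default.
    # A user-less event now flows downstream looking perfectly normal.
    return event.get("user_id", "anonymous")

def handle_event_fail_fast(event: dict) -> str:
    # Fail fast: if the assumption is wrong, raise loudly right here.
    # The KeyError (plus decent observability) tells us our model of
    # the data flow was wrong, before the wrongness compounds.
    return event["user_id"]
```

The defensive version never tells you you were wrong; the fail-fast version does it on the first bad event.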

Coding agents love defensive programming. There could be many reasons for this, but two come to mind:
– They just don’t want it to crash, like the early JS mentality of keeping the page running no matter what.
– They don’t want to “miss an edge case”, perhaps reflective of a lot of training data produced by people who didn’t want to “miss an edge case”.

When you vibe code (not agentically engineer, or whatever we’re calling it) and everything looks amazing, how much of the implementation is just failing quietly because of defensive programming? Perhaps it helps explain the early-euphoria-then-hard-crash arc we saw many vibe coders go through.

Instruct your coding agents to fail fast and loud.
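One way to do that is a standing rule in whatever instructions file your agent reads (e.g. a CLAUDE.md or AGENTS.md). The exact wording below is just a sketch:

```markdown
- Do not add speculative try/except blocks, fallback values, or "just in
  case" null checks. If a state should be impossible, assert it and let
  the code crash.
- Never downgrade an error to a warning, or swallow it to keep a demo
  working. Surface it, and ask before choosing a handling strategy.
- If you are guessing at why an edge case could occur, stop and say so
  instead of silently handling it.
```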