Stop generating what you can look up

Mar 31, 2026 · Thought

Generation is the most visible thing LLMs do, so it's the tool people reach for, even for parts of a problem already solved by a database, an API, or plain code.

The real work isn't the prompt. It's the line between what you generate and what you compute.

Use the model for judgment calls, the ones with no single right answer. Use plain code for everything else: code knows, it doesn't guess.
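That split might look like this, a minimal sketch assuming a hypothetical app: the lookup and the arithmetic stay deterministic, and only the open-ended judgment call touches the model. Every name here (`PRICES`, `call_llm`, `support_reply`) is made up for illustration.

```python
# A lookup table stands in for a database or API.
PRICES = {"basic": 10, "pro": 30}

def invoice_total(plan: str, months: int) -> int:
    # Arithmetic and lookups: plain code. Exact, testable, cheap.
    return PRICES[plan] * months

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; reserved for judgment calls.
    return f"[model-drafted reply to: {prompt}]"

def support_reply(plan: str, months: int, complaint: str) -> str:
    total = invoice_total(plan, months)  # computed, never generated
    draft = call_llm(f"Write an empathetic reply to: {complaint}")  # judgment
    return f"Your total is ${total}. {draft}"
```

The number in the reply can never be hallucinated, because it never passes through the model; the model only contributes the part with no single right answer.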

Build around AI, not through it.