
The spec is the product

I used to think specs were overhead. Documents that satisfied a process but didn't improve the product.

That changed when I started pointing agents at my specs and watching them build. The gap between "I know what to build" and "here it is" collapsed. But the work of knowing what to build didn't get easier. It got more important.

An engineer at Anthropic recently wrote a spec, pointed Claude at an Asana board, and went home for the weekend. Claude broke the spec into tickets, spawned agents for each one, and they started building independently. When an agent got confused, it ran git-blame and messaged the right engineer in Slack. By Monday, the feature was done.

That story fascinates me because it exposes where the real work lives. Not in the implementation. In the spec. You can't write a spec that holds up when multiple agents execute it in parallel without genuinely understanding what you're building. Architecture, edge cases, failure modes, integration points. The spec is the product thinking.

What goes into a good spec

A good spec starts with context: what problem exists and who has it. Not "users need a dashboard" but "board members check compliance status weekly and currently dig through three spreadsheets to find it." Real people, real pain.

Then behavior, step by step, including what most people skip. What happens when the request fails? When the list is empty? When the user doesn't have permission? I also write down what we're explicitly not building. Without boundaries, AI fills in the gaps with reasonable-looking features you never asked for.

The last section is concrete examples. "When an improvement has been fully depreciated, show 0 kr remaining value and grey out the row." That sentence does more work than "handle edge cases appropriately" ever could.

Context curation

The quality of agent output is directly proportional to the context you feed it. Early on, I'd give vague instructions and wonder why the output missed the point. Now I maintain context docs that I feed to agents before starting any project: who the users actually are, what they've said in their own words, examples of what good looks like, and what we've already tried and killed.

When an agent has that context, it doesn't start from zero. The output fits because the input was specific.

The unsolved part

I'd be lying if I said this all works perfectly. Verification is still the biggest gap. Tests that pass even on the pre-patch codebase and so prove nothing, UI that looks right but handles nothing off the happy path, bad coding patterns that leak into the context window. We need better models and better evaluation infrastructure before "write a spec and go home for the weekend" becomes the default.

But the direction is clear. The effort moved upstream. I spend more time writing specs than I ever spent writing code, and the output is better for it.

This is what Wayform does. It turns rough ideas into detailed specs because the spec is where the real product decisions happen. Everything after is execution.