
The design input problem

AI made it easy to ship UI fast. Shipping it well is a different problem.

Whether you're prompting freestyle or writing structured specs, the output quality ceiling is set by the design inputs you provide. Ask Claude to "add a settings modal" and you'll get a functional modal. It will work. It won't look like something a designer touched.

I kept hitting this with Andelo. The components worked. The spacing was close enough. But something always looked off, and I couldn't always say what. This is where vibe coders get stuck. It's also where agentic engineers stall: the components function, but they don't feel production-ready.

The bottleneck is the input material. The AI generates from statistical averages when it could be pulling from a real design system. The gap between "a button that looks reasonable" and an UntitledUI button is the gap between a prototype and a product.

So I built an open-source MCP server that gives AI agents direct access to UntitledUI's component library. Instead of generating a modal from scratch, the agent fetches the real UntitledUI modal with all its base dependencies: buttons, inputs, icons, layout primitives.

The workflow shifted. "Add a settings page" used to mean generating layout, styling it, then iterating on spacing until it stopped looking off. Now it means fetching the UntitledUI settings template and adapting it to my data model. The starting point is already designed.

It works with Claude Code, Cursor, and VS Code. Base components are free; pro components need an UntitledUI license.
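For orientation, MCP servers are registered in the client's config file. A sketch of what an entry might look like, using the standard `mcpServers` shape that MCP clients share (the package name `untitledui-mcp` here is illustrative, not necessarily the real one):

```json
{
  "mcpServers": {
    "untitledui": {
      "command": "npx",
      "args": ["-y", "untitledui-mcp"]
    }
  }
}
```

Check the project's README for the actual install command and any license-key configuration for pro components.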

As AI gets better at generating code, the bottleneck moves to design quality. Better prompts won't close that gap. Better building blocks will.