Most skills are just prompts
I've been using Claude Skills since they launched. They're the single best feature in Claude Code. But scrolling through Twitter, I keep seeing "skills" that completely miss the point.
What skills actually are
Anthropic defines them as modular capabilities that extend Claude's functionality. In practice, a skill is a SKILL.md file containing your opinionated approach to solving a specific problem. Workflows, context, best practices. Claude loads the skill when relevant and follows your process instead of guessing.
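A minimal SKILL.md follows this shape. The YAML frontmatter fields (name, description) match Anthropic's documented skill format; the skill itself — a commit-message skill — and everything in its body are a hypothetical example:

```markdown
---
name: conventional-commits
description: Write commit messages following the Conventional Commits spec. Use when committing changes.
---

# Conventional Commits

## Workflow
1. Run `git diff --staged` and summarize what actually changed.
2. Pick a type: feat, fix, refactor, docs, or chore.
3. Write `type(scope): summary` in under 72 characters.

## Rules
- Imperative mood ("add", not "added").
- One logical change per commit; ask to split mixed diffs.
```

The description field matters: Claude uses it to decide when the skill is relevant, so it should name both the task and the trigger.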
The word opinionated is what separates a skill from a prompt.
LLMs are trained on everything from beginners to experts, and the output tends toward the average. In the training data, your know-nothing uncle's take on software architecture weighs just as much as a principal engineer's, and the model splits the difference.
A skill overrides that averaging. It tells Claude: here's exactly how I want this done, step by step, with examples of it done well.
What I keep seeing
Someone tweets "I built a marketing skill" and you open it up. It says: "You are a marketing expert. Research app strategies for AI solopreneurs." That's a prompt saved to a file. No workflow, no specific context, no best practices.
I've seen this pattern over and over. A "security audit skill" that says "thoroughly investigate security and fix it." A "code review skill" that says "review the code carefully." None of these change Claude's behavior because they don't contain any actual expertise.
The debugging skill I use daily has four phases: root cause investigation, pattern recognition, hypothesis formation, then fix. Each phase has specific rules. Jumping straight to "it's probably X, let me fix that" is explicitly flagged as the wrong approach. That structure is what makes it a skill and not a prompt.
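The structure looks roughly like this (a condensed sketch of the phases described above, not the full file):

```markdown
# Debugging

## Phase 1: Root cause investigation
- Read the error and the full stack trace before touching code.
- Reproduce the failure and note the exact inputs.

## Phase 2: Pattern recognition
- Search the codebase for similar code paths. Does the bug appear there too?

## Phase 3: Hypothesis formation
- State one falsifiable hypothesis and the evidence for it.
- "It's probably X, let me fix that" without evidence is the wrong
  approach. Go back to Phase 1.

## Phase 4: Fix
- Make the smallest change that addresses the root cause, then re-run
  the reproduction from Phase 1.
```

Note that the anti-pattern is written into the skill itself. Telling Claude what not to do, at the exact step where it would otherwise do it, is more effective than a general instruction at the top.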
The three things every skill needs
A good skill has workflows: a specific sequence of steps. Not "do research" but "first check usage data, then cross-reference with error logs, then form a hypothesis." The order matters because it prevents Claude from skipping straight to conclusions.
It has context: domain knowledge, vocabulary, few-shot examples of the task done well. A writing skill I use includes examples of good and bad output side by side so Claude can see the difference before it starts generating.
And it has best practices: opinionated rules. "Start with a declarative opening line." "Vary sentence length." "Never skip root cause investigation." These are things Claude wouldn't reliably do on its own.
If your skill file doesn't contain all three, you're just prompting with extra steps.
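Put together, the three elements map onto sections of the file. Here is a hypothetical writing skill, condensed, reusing the rules quoted above:

```markdown
## Workflow
1. Draft an outline before writing any prose.
2. Write the opening line last, once you know the point.

## Context
Good opening: "Most skills are just prompts." (declarative, specific)
Bad opening: "In this post, I will explore the topic of skills." (throat-clearing)

## Best practices
- Start with a declarative opening line.
- Vary sentence length.
- Cut any sentence that restates the previous one.
```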
The test that matters
Run the same task twice: once with your skill loaded, once without. If the output quality is roughly the same, your skill isn't adding value. A good skill produces noticeably different output because it steers Claude away from average responses toward your specific process.
If you can't articulate what makes your approach different from Claude's default behavior, you don't have a skill. You have a prompt.