Atomic skills: composable prompts for repeatable AI
Gareth Williams
Lead Engineer
Building a library of composable AI capabilities that compound over time
It started with a blog post about making blog posts.
I’d been using Claude to write “How I AI” posts, short pieces documenting how I was using AI in my day-to-day work. Each time, I’d paste in the same rambling set of instructions: play 20 questions to elicit context, structure the output like this, check for these things, use this tone. It worked, but every conversation started with the same five minutes of copy-paste setup. My RSI-crippled wrists were not impressed.
So I turned the instructions into a skill. For the uninitiated, skills are essentially structured prompts, a set of detailed instructions that tell an agent how to perform a task repeatedly. They can be bundled with reference material, example outputs, and even CLI commands. Think of them as the difference between explaining a recipe from memory every time you cook, and having it written down on a card.
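To make that concrete, here's a sketch of what a skill file can look like: in Claude's convention, a skill is a folder containing a SKILL.md with YAML frontmatter (a name and a description telling the agent when to reach for it), followed by the instructions themselves. The content below is illustrative, not my actual skill.

```markdown
---
name: how-i-ai-post
description: Drafts a "How I AI" blog post by interviewing the author first
---

Play 20 questions with the author to elicit the topic, angle, audience,
and key arguments before writing anything.
Structure the post as: hook, context, walkthrough, caveats, takeaway.
Keep the tone conversational. Check every claim before presenting a draft.
```

The description matters more than it looks: it's what the agent reads when deciding whether the skill applies, so it earns the same care as the instructions.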
The How I AI skill was immediately useful. I could spin up a new post in minutes, with the AI asking me the right questions to draw out the detail. But something more interesting happened. The skill was doing double duty. Every post it helped me write was also documenting my approach to a problem, effectively speccing out my next skill. The skill was creating its own offspring.
That was the moment things clicked.
The composability insight
I started cataloguing every repeatable task in my workflow. Writing blog posts, reviewing pull requests, analysing documents, creating presentations, researching topics. Each one had a set of sub-tasks that kept recurring. The 20 questions technique I used to elicit context for blog posts? I was using the same technique when refining project specs. The prose analysis I ran on drafts? Same checks I wanted on documentation. The claim extraction? Useful everywhere.
These weren’t just skills. They were reusable components.
If you’ve worked with design systems, this will feel familiar. Brad Frost’s atomic design methodology breaks a UI into its constituent parts: atoms (a button), molecules (a search bar), organisms (a site header), templates, and pages. Each level composes from the level below. The same principle maps cleanly onto skills.
In the atomic skills methodology, an atom is a core, reusable skill: elicitation techniques, analysing prose, extracting claims, checking assumptions. A molecule combines several atoms into a coherent sub-workflow, like a quality control check that runs prose analysis, claim extraction, and assumption testing in sequence. An organism is a complete workflow, like “create a blog post”, which pulls in the elicitation molecule, a research molecule, a drafting step, and the quality control molecule.
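The composition itself is simple enough to sketch in a few lines. Here's a minimal Python illustration, assuming skills are represented as prompt fragments; the names and instruction text are made up for the example, not taken from my actual skill library.

```python
def atom(instructions: str) -> str:
    """An atom: one focused, reusable set of instructions."""
    return instructions.strip()

def compose(*parts: str) -> str:
    """A molecule or organism: parts joined into one ordered prompt."""
    return "\n\n".join(parts)

# Atoms: each does exactly one job
analyse_prose = atom("Review the draft for tone, clarity, and rhythm.")
extract_claims = atom("List every factual claim and flag any without support.")
check_assumptions = atom("Surface the assumptions the argument rests on.")

# Molecule: a quality control pass built from three atoms
quality_control = compose(analyse_prose, extract_claims, check_assumptions)

# Organism: a complete workflow that reuses the molecule wholesale
create_blog_post = compose(
    "Play 20 questions to elicit topic, angle, and audience.",
    "Draft the post against the agreed brief.",
    quality_control,
)
```

The payoff is in the last line: the organism doesn't restate the quality checks, it pulls in the molecule, so improving an atom improves every workflow that uses it.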
The key point is not that you decompose skills to some theoretical indivisible unit. That wouldn’t be valuable. The point is reusability. If you find yourself prompting the same instructions across multiple workflows, that’s a skill waiting to be extracted.
From spaghetti prompt to skill chain
To make this concrete, here’s what my blog post workflow looked like before and after.
Before: One enormous prompt, pasted into every conversation. It tried to do everything at once: elicit the topic, research context, draft the post, check the prose, validate claims, suggest improvements. It worked about 70% of the time. When it didn’t, I’d have to unpick which part had gone wrong and re-prompt the whole thing. Context window bloat was a constant problem.
After: A composed chain of focused skills.
- Elicitation techniques (atom) — plays 20 questions using multiple choice in the Claude UI to draw out the topic, angle, audience, and key arguments
- Web research (atom) — searches for relevant sources, counter-arguments, and supporting evidence
- Elicitation and scoping (molecule) — combines elicitation with research to produce a structured brief
- Drafting — writes the post against the brief
- Quality control (molecule) — runs analyse prose, extract claims, check assumptions, and extract wisdom in sequence, producing a structured review
- Refinement — applies the QC feedback, then runs QC again
Each step is independently testable, independently improvable, and reusable in other workflows. When I later built a “create presentation” organism, the elicitation, research, and quality control molecules slotted straight in. No re-prompting, no copy-paste, no wrist pain.
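If the chain above were sketched as code, it would be a short pipeline: scope, draft, then loop quality control and refinement. The `ask` function below is a stub that just echoes, purely so the control flow is runnable; in practice it would be whatever you use to call the model, and the prompt strings stand in for the full skills.

```python
# Each constant stands in for a full skill's instructions.
ELICIT = "Play 20 questions to draw out topic, angle, and audience."
RESEARCH = "Search for sources, counter-arguments, and evidence."
DRAFT = "Write the post against the brief."
QC = "Run prose analysis, claim extraction, and assumption checks."
REFINE = "Apply the QC feedback to the draft."

def ask(prompt: str) -> str:
    """Placeholder for a real model call; echoes the first prompt line."""
    return f"[output for: {prompt.splitlines()[0]}]"

def create_post(topic: str, qc_rounds: int = 2) -> str:
    # Elicitation and scoping molecule: elicitation + research -> brief
    brief = ask(f"{ELICIT}\n{RESEARCH}\nTopic: {topic}")
    draft = ask(f"{DRAFT}\nBrief: {brief}")
    # Quality control then refinement, repeated
    for _ in range(qc_rounds):
        review = ask(f"{QC}\nDraft: {draft}")
        draft = ask(f"{REFINE}\nReview: {review}\nDraft: {draft}")
    return draft
```

The structure is the point, not the stub: each step consumes the previous step's output, so when a step misfires you re-run that step alone instead of unpicking one enormous prompt.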
Skills that build skills
This is where it gets properly meta. Anthropic have built a skill into Claude Code that creates skills. You describe what you want automated, and the skill generates a structured skill file following their conventions. I used it extensively to build out my suite.
But the real power move was using my own skills to create better skills. The elicitation techniques atom that I’d built for blog posts? I used it to interview myself about what each new skill needed to do. The quality control molecule? I ran it against the generated skill files to check for gaps and inconsistencies. The skills were bootstrapping themselves.
This creates a flywheel. Each skill you build makes the next one faster to create and higher quality. The elicitation skill draws out better requirements. The QC skill catches more issues. The creation skill produces more consistent output. You’re not just automating tasks, you’re automating the improvement of your automation.
Where this breaks
I’d be lying if I said this was all smooth sailing. A few honest caveats. Skills can conflict with each other when composed, especially when two atoms have overlapping but slightly different instructions for the same concern. Context window management is real: a deeply composed organism that pulls in six or seven atoms can eat through tokens fast, and you need to be thoughtful about what gets loaded and when. And there’s an up-front investment. Writing a good atom takes iteration, testing against real tasks, and refinement. The payoff is real, but it’s not instant. My skill set is still being refined; it evolves a little every day. PDAC is the methodology I use to drive this: Plan a task’s execution; Delegate it to AI; Assess the outputs and rectify; Codify improvements in the skills that were used, or create new ones as needed.
Skills are not the only game in town
Skills sit alongside MCP servers, fine-tuning, and RAG as ways to extend what AI can do for you. Vercel have noted that for some tasks, skills are more reliable and context-efficient than MCP servers. That’s been my experience too. If you need to connect to an API, don’t default to an MCP server. Have AI create some code and write a skill to execute it.
It’s worth noting that the concept isn’t unique to Claude. All the AI chatbots and agentic coding tools have similar constructs. The principles of atomic composition apply regardless of the tool.
Start sharing
The most important thing to take away from this post is to start. Create a skill. Any skill. The one you’ll use tomorrow, for the task you find yourself repeating. You don’t need to adopt the atomic methodology on day one. You don’t need a taxonomy or a Mermaid diagram. Just take the prompt you keep pasting and give it a permanent home.
Once you’ve done that, start sharing. When you put skills in a shared repository, other people benefit and you benefit in turn. They’ll offer improvements, refinements, and bug fixes based on their own experiences. At Versent, we’ve built a Claude Code plugin marketplace and an AI templates library for exactly this reason. If you’re looking for existing skills to learn from or build on, Smithery maintains a curated collection for coding tasks, and SkillHub aggregates community-contributed skills across a range of use cases.
Skills, plugins, prompts, whatever we end up calling them, the ecosystem around shareable, composable AI instructions is growing fast. Anthropic are bringing skills to Claude Desktop. CoWork already supports plugins. The pattern is clear: we’re heading toward marketplaces for AI capabilities, not unlike the app stores that defined the mobile era. Except this time, the “apps” are instructions for how to think about problems, not just interfaces for interacting with data.
If that sounds like hyperbole, consider this: the most valuable thing a senior engineer or architect carries isn’t their ability to write code. It’s their judgement, their process, the way they break down problems and know which questions to ask. Skills are the first credible mechanism for encoding that expertise in a way that scales beyond one person’s context window. The teams that figure out how to compose, share, and iterate on these things will have a compounding advantage that’s hard to replicate.
The ones still typing the same prompt into a chat window every morning? They’ll plateau. And they’ll blame the AI.