How to build marketing systems in Claude Code
What I learned building a system where skills chain together and outputs get smarter every run
As readers of this Substack know, I’ve spent months building an AI marketing system in Claude Code. I’ve shown parts of it already, such as the content team. It’s a full marketing system with specialised skills, shared memory, feedback loops, and orchestrator skills all tied together.
It’s not finished :)
But I’ve learned a lot.
I wanted to pause and list out some learnings from building entire systems: systems where skills chain together, where the outputs of one skill automatically feed the inputs of another, and where the whole thing gets smarter with each run.
Here are 5 things I learned about system building in Claude Code.
1. Every marketing team needs at least one Claude Code-pilled builder
Not everyone on your team needs to know how Claude Code works. But someone does.
That person builds the skills. A skill is a SKILL.md file, a structured instruction set that tells Claude exactly what to do, what data to read, what scripts to run, and where to save the output. Think of it as packaging your best marketer’s brain and skills into a repeatable workflow.
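As a rough sketch, here is what a minimal SKILL.md could look like. The frontmatter fields and file paths below are my assumptions for illustration, not an official spec:

```markdown
---
name: audience-profiler
description: Research an ICP and write a structured audience brief.
---

## Instructions
1. Read the existing profile in ./profiles/audience-profile.md, if present.
2. Research the target ICP using the inputs the user provides.
3. Write a structured brief and save it to ./profiles/audience-profile.md.
```

The point is that the instruction set names its inputs, its steps, and exactly where the output lands, so the workflow is repeatable.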
An audience profiler that researches an ICP and writes a structured brief. A competitive intel monitor that scrapes competitor content and extracts positioning changes. A campaign brief generator that reads your brand guidelines and produces briefs in your format. A performance analyser that flags what’s underperforming.
*Note: yes, these are all part of my marketing system and all part of future posts.
The builder creates these skills, tests them, and gets them working reliably. Then they package the whole thing as a custom MCP server.
MCP — Model Context Protocol — is how you turn a collection of skills into a product your whole team can use. The builder runs the MCP server. Everyone else connects to it from Claude Desktop. They get to use all the best skills maintained by the most Claude Code-pilled person on your team.
One builder. One MCP server. An entire team multiplied.
The mistake most teams make is trying to make everyone an AI power user. That doesn’t scale. What scales is one person who understands the architecture, packaging it into an MCP server that everyone else connects to from the tool they already use.
2. It’s the system that compounds
Here’s what separates a good AI workflow from a great one: the systems thinker asks different questions. Not “what should this prompt do?” but “what does this skill read from? What does it write to? What breaks downstream if this output is wrong? How does it get better over time?”
Create a ./profiles/ folder. Put your audience profile in it. Your brand voice guidelines. Your competitive positioning doc. Your ICP research. Now build every downstream skill to automatically read from that folder.
Now, when you change the audience profile once (update a pain point, add a new competitor), every downstream skill picks up the change automatically, and results improve across the board.
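A minimal sketch of what “every downstream skill reads from that folder” can look like, assuming the profiles live as markdown files in a `profiles/` directory (the folder name and file format are assumptions):

```python
from pathlib import Path

def load_foundations(profiles_dir: str = "profiles") -> dict[str, str]:
    """Read every foundational profile so downstream skills always use the latest version."""
    return {p.stem: p.read_text() for p in Path(profiles_dir).glob("*.md")}
```

Any skill that starts by calling `load_foundations()` inherits an updated audience profile or positioning doc for free, which is what makes the system compound.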
It’s not an isolated skill. It becomes an architecture.
It makes iteration much easier. Over time, you see that your audience is price-sensitive, so you update your audience profile to reflect that. Next time you run positioning, it incorporates that into its flow.
Having foundational profiles that power the entire system makes it much easier to change and adapt the system.
Most people are building isolated prompts and skills. Building systems is much harder, but wow, the results are 10x more powerful.
3. Separate your layers, or your system will fight itself
I learned this the hard way.
Early on, I bundled multiple tasks into the same skill. For example, one massive instruction set that was supposed to research the market, extract insights, pick the best angle, and produce a finished deliverable. The result was sub-par output.
It was much easier to separate skills into single tasks across layers.
Here’s the mental model that works:
Layer 1 — Foundation: Skills that create the files that everything else depends on. Audience profiles, brand voice docs, and competitive positioning. They power the rest of the system.
Layer 2 — Research: Skills that gather and rank raw material. Market research, trend monitoring, competitor tracking, and idea scoring.
Layer 3 — Execution: Skills that produce outputs: campaign briefs, ad copy, content drafts, email sequences, and reports. They read from Layer 1 (who are we talking to?) and Layer 2 (what’s the material?) and produce the thing.
Layer 4 — Feedback: Skills that measure what happened and feed learning back into Layer 1. Performance tracking, monthly reviews, and A/B test analysis.
Each layer writes to a specific folder. Each layer reads from the layers above it.
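Under one possible set of folder names (these are assumptions, not the only way to lay it out), the four layers might map to a project tree like this:

```
marketing-system/
├── profiles/      # Layer 1: audience, brand voice, positioning
├── research/      # Layer 2: market research, trends, competitor tracking
├── outputs/       # Layer 3: briefs, copy, drafts, reports
└── performance/   # Layer 4: logged results, review notes
```

Each folder is both a boundary and a contract: a layer only writes to its own folder and only reads from the folders above it.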
When something breaks, you know exactly which layer to fix. When you want to improve research quality, you touch Layer 2 without risking your execution output. When you want to add a new capability, say, generating landing page copy, you add one skill to Layer 3.
4. Feedback loops help your system evolve rather than stay static
I’ve learned that adding feedback loops across the system vastly improves the output.
Here’s what a feedback loop looks like in practice: every time a deliverable ships, a post goes live, a campaign launches, an email is sent, a logging skill saves a structured record.
What was the input? What skill produced it? What were the results? JSON. Machine-readable. Saved to a /performance/ folder.
Then, on a regular cadence, a review skill reads those records and finds patterns. Which approaches are working? Which audience segments respond to which messaging? Which channels outperform for which content types?
That analysis feeds back into your foundational files. The audience profile gets sharper. The brand voice doc evolves. The system literally rewrites its own foundations based on what actually worked.
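A review skill's pattern-finding step can start as simply as averaging one metric per skill across all the logged records. This sketch assumes the JSON records carry an `engagement` number in their `results` field, which is a made-up metric name for illustration:

```python
import json
from collections import defaultdict
from pathlib import Path

def review_performance(folder: str = "performance") -> dict[str, float]:
    """Average an engagement metric per skill across all logged records."""
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for record_file in Path(folder).glob("*.json"):
        record = json.loads(record_file.read_text())
        totals[record["skill"]] += record["results"].get("engagement", 0.0)
        counts[record["skill"]] += 1
    return {skill: totals[skill] / counts[skill] for skill in totals}
```

The output of a review like this is what you hand back to the Layer 1 skills when they rewrite the foundational profiles.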
Next month’s outputs are better than this month’s.
5. The orchestrator is the product.
Here’s the test: could someone on your team who knows nothing about your skill architecture walk up to the system and get a finished deliverable?
If the answer is no, if they need to know which skill to run, in what order, with what inputs, you don’t have a product. You have a collection of parts.
The orchestrator is the skill that solves this. It scans the project state, the foundational files, the available research, and whether there’s unprocessed feedback. It presents the user with simple options: “What do you want to create?” The user describes the outcome. The orchestrator chains the right skills together in the right order, end to end.
Build the orchestrator last. But design it from the first skill you write. Every skill should read from predictable paths, write to predictable paths, and produce output that the next skill in the chain can consume without transformation.
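The chaining idea reduces to something like this sketch, where the skill functions are stand-ins and the pipeline names are invented for illustration:

```python
def research_market(topic: str) -> str:
    # Stand-in for a real research skill drawing on Layer 2 material.
    return f"Research notes on {topic}"

def write_brief(research: str) -> str:
    # Stand-in for an execution skill combining Layer 1 profiles with the research.
    return f"Campaign brief based on: {research}"

# Each pipeline maps a deliverable the user asks for to an ordered chain of
# skills, where every step's output is the next step's input.
PIPELINES = {
    "campaign brief": [research_market, write_brief],
}

def orchestrate(deliverable: str, topic: str) -> str:
    artifact = topic
    for step in PIPELINES[deliverable]:
        artifact = step(artifact)
    return artifact
```

The user only names the outcome (“campaign brief”); the orchestrator owns the knowledge of which skills run, in what order, with what inputs.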
BONUS: Claude Code (run /insights)
Here’s a killer bonus tip. Within your project folder, run /insights in Claude Code and you’ll get a ton of valuable information about your Claude Code usage.
The most important part is the recommendations.
This is Claude recommending how you can improve the system. Simply ask Claude to implement its recommendations. Pretty epic.
Yes, I will keep doing deep dives on parts of the system and exposing skills. But I believe the above will help those who want to build any system in Claude Code for their own purposes.
Until Next Time, Happy AI’fying,
Kieran