AI content all sounds the same. Here's a Claude Code skill to fix it.
Why everyone using AI is creating the same content, and how the Opposite-Start skill solves it
One of our most successful AI initiatives at HubSpot has been integrating AI across all of our prospecting. It's generated over 10k additional meetings each quarter. What's interesting is that the most valuable part of our AI prospecting agent wasn't the prompts for crafting the email; it was the 'do not say list'. Every phrase, opener, and structural pattern that the agent was never allowed to produce. No "I hope this finds you well." No "quick question." No "I came across your profile." No em dashes. No opening with the recipient's name on its own line.
We added to it constantly. Every batch of outreach taught us something new to ban. The list grew, the results improved, and after a few months, the list had become far more valuable than the prompts themselves.
Hemingway wrote about this in 1932. He called it the Iceberg Theory:
“If a writer of prose knows enough about what he is writing about he may omit things that he knows. The dignity of movement of an iceberg is due to only one-ninth of it being above water.”
What you don't say is the eight-ninths underneath. The reader feels the mass even when they can't see it. That's how you craft marketing that truly engages your audience.
There’s a reason AI content all sounds the same, and it’s not that the models are bad writers.
AI collapses the starting point.
When a million marketers open Claude and type “write me a LinkedIn post about X,” the model begins from the same place for every single one of them. Same training data. Same defaults. Same pull toward the same frames, the same openers, the same conclusions.
The output is commoditised because everyone is starting in the same position.
Rory Sutherland has a line for this. “The opposite of a good idea can also be a good idea.” His example is low-cost airlines. The legacy carriers were all competing on the same axis, comfort and service, each trying to be a marginally better version of the same thing. Ryanair won by starting from the opposite end of the spectrum entirely. They won on cost.
Using AI off the shelf means you start in the same place as everyone else. That's why all this content is converging: AI drops everyone into the same red ocean of ideas.
That’s why I built a Claude Code skill for my content system called the “Opposite-Start.”
The purpose of the skill is to help you change the starting point of your marketing or content idea. I’ve found it an incredible asset to not get pulled into the sea of sameness.
Here’s how it works.
Step 1: Map the current conversation.
Before writing a single word, the skill searches across X, Reddit, LinkedIn, and the web for how people are currently talking about the topic. It clusters what it finds into 3-5 core themes, names sources where it has them, and rates each cluster's saturation: dominant take, growing, or niche.
Then it writes one paragraph of synthesis. The centre of gravity. The default position that every AI-generated post on this topic is about to converge toward.
This is the position you are deliberately not going to start from.
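The skill's actual internals aren't shown in this post, but the shape of Step 1's output can be sketched roughly like this. All names here (`ThemeCluster`, `centre_of_gravity`) are hypothetical, not the skill's real code:

```python
from dataclasses import dataclass

# Hypothetical shape of Step 1's output; the skill's real
# internals may differ.
@dataclass
class ThemeCluster:
    name: str
    sources: list[str]   # URLs or handles, where available
    saturation: str      # "dominant", "growing", or "niche"

def centre_of_gravity(clusters: list[ThemeCluster]) -> ThemeCluster:
    """Return the most saturated cluster: the default position
    AI-generated posts on this topic will converge toward."""
    order = {"dominant": 0, "growing": 1, "niche": 2}
    return min(clusters, key=lambda c: order[c.saturation])

clusters = [
    ThemeCluster("outcome maxing beats token maxing", ["x.com/..."], "dominant"),
    ThemeCluster("token burn as a leading indicator", [], "growing"),
]
print(centre_of_gravity(clusters).name)
```

The point of the structure: the synthesis paragraph is written *about* the dominant cluster, so the rest of the pipeline knows exactly which position to move away from.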
Step 2: Generate 6 inversions.
Not lazy opposites. Six different types of inversion, each one a genuinely different starting position:
Reframe lens — flip the core mechanism. Ask what the dominant story is hiding. If the popular take is “outcome maxing beats token maxing,” a real reframe isn’t “tokenmaxing is fine.” It’s something like: “the outcome maxing narrative is giving CFOs a respectable reason to veto every AI experiment that hasn’t already proven its return — which means the teams getting shamed for burning tokens today are the only ones building a library of working use cases for 2027.”
Tension lens — find the real operator disagreement. Ask where credible people are making different calls right now. Not theoretical pros and cons. Actual split among operators you’d trust. Your reader is living in that indecision.
Hidden cost lens — price the second-order effect. Ask what the CFO will be asking about in 18 months that nobody’s tracking yet. Everyone’s watching the visible invoice. The real cost shows up later, somewhere else, in a number nobody’s building a dashboard for.
Leading indicator lens — flip the time horizon. If the dominant take lives in the present, move 18 months out. If it’s future-tense hype, move to what already happened. The signal is usually hiding in data nobody’s reading.
Category error lens — change the question. Same topic, different tension. The dominant take frames one conflict. You find a different conflict hiding inside it. “Should marketers use more AI” is a weak question. “Should marketing workflows still have humans in the default path” is sharper.
Counter-case lens — change the hero. Most content on a topic has a default protagonist. Flip who you’re defending. The 12-person agency out-shipping the 80-person one. The skeptic who quietly rebuilt the workflow.
Each inversion has to pass one check: could a serious practitioner defend this with real experience? If not, it’s a gimmick.
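That final check is the important part, and it can be sketched as a simple gate. This is my illustration, not the skill's actual implementation; the lens names come from the list above, everything else is hypothetical:

```python
# Hypothetical sketch of Step 2's gate: each lens produces a
# candidate inversion, and only defensible ones survive.
LENSES = [
    "reframe", "tension", "hidden cost",
    "leading indicator", "category error", "counter-case",
]

def keep_defensible(candidates: dict[str, str],
                    defensible: set[str]) -> dict[str, str]:
    """Drop any inversion a serious practitioner couldn't defend
    with real experience; what's left is a genuine starting
    position, not a gimmick."""
    return {lens: take for lens, take in candidates.items()
            if lens in defensible}

candidates = {lens: f"({lens} take on the topic)" for lens in LENSES}
survivors = keep_defensible(candidates, {"reframe", "hidden cost"})
print(sorted(survivors))
```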
Step 3: Stick-test each one.
For each inversion, the skill names the tension it creates, who it speaks to and who it doesn't, and the cost of being wrong: what does the reader have to decide differently if they agree?
Then it runs three questions. Would anyone actively disagree? Would a senior practitioner say “I hadn’t thought of it that way”? Can the writer actually prove it?
Each inversion gets scored strong, mixed, or weak. Then the skill picks one and explains why.
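As a rough sketch of how those three questions map to a grade (the yes-count thresholds are my assumption, not the skill's documented logic):

```python
# Hypothetical scoring for Step 3's stick-test. Three yes/no
# questions; the mapping from yes-count to grade is an assumption.
def stick_test(disagreement: bool, surprise: bool, provable: bool) -> str:
    """Score an inversion: would anyone actively disagree, would a
    senior practitioner say "I hadn't thought of it that way", and
    can the writer actually prove it?"""
    yes = sum([disagreement, surprise, provable])
    if yes == 3:
        return "strong"
    if yes == 2:
        return "mixed"
    return "weak"

print(stick_test(True, True, True))
print(stick_test(True, False, True))
```

An angle that fails the "can you prove it" question can never score strong, no matter how contrarian it is, which is what keeps the output from drifting into hot takes.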
I ran Opposite-Start on a topic I’ve been thinking about for weeks: tokenmaxing for marketing.
The default take it surfaced was the one you’ve seen fifteen times on LinkedIn this month. Stop celebrating token burn. Start measuring outcomes. Outcome maxing beats token maxing.
The skill surfaced four inversions. The one it recommended:
The tokenmaxing vs. outcome maxing debate isn’t a productivity argument. It’s a vendor positioning war. The loudest voices pushing the outcome maxing frame sell workflow software that becomes more valuable if customers measure outcomes their platforms already track. What looks like a principled metrics debate is actually about who gets to own the AI line item in your budget: the foundation model providers, or the application layer.
That’s a genuinely interesting take.
The evidence is all public. AWU announcement dates. CEO LinkedIn posts. Pricing pages. And the enemies list is sharp — every vendor whose software conveniently measures the outcomes they’re telling you to optimise for. A good angle has enemies.
I also ran it on GPT-5.5, which was released last week. It came up with the following:
The angle: “GPT-5.5 isn’t a new model — it’s a pricing reset. OpenAI doubled the per-token sticker price and told GTM teams to stop counting tokens.”
It scored highest because it's timely (ties to yesterday's specific launch), non-consensus (the whole press cycle is running benchmark tables and pricing-outrage takes; this one goes underneath both to a third explanation), and practically actionable (hits P3 directly: a cost-per-outcome framework your audience can take into their next CFO meeting). The anchor story uses OpenAI's own press-call language, Brockman's "more frontier AI available" paired with Pachocki's "model progress has been surprisingly slow", as evidence that the vendor itself signaled the reframe.
Runners-up worth banking for future posts:
- Category error — “Spec clarity is the moat; model choice is a rounding error” — evergreen, runs any week
- Leading indicator — “Multi-model GTM stack, who owns routing wins” — better for X Article or podcast
- Hidden cost — “The integration tax” — needs 4-6 weeks for anecdotes to surface
Note that the above is just the high-level output; the skill also creates a full brief for the recommended angle to turn into great content.
Now, the objection I’d have if I were reading this: “Kieran, this only works when there’s an existing public debate to map. What about topics without one?”
Fair. The skill is only as strong as the conversation it can find. For a topic with no real public discourse, Step 1 is thin and the inversions get shaky. The real use case is topics where a consensus is forming and you want to find somewhere else to stand before everyone else arrives at the same position.
Most topics worth writing about are in that bucket.
Install
It’s free. MIT licensed. Python standard library only — no pip install, no dependencies to manage.
```shell
git clone https://github.com/searchbrat/singleangle ~/.claude/skills/singleangle
```

You'll need Claude Code and, for the best results, OpenAI and xAI API keys (for Reddit and X search, respectively). Without keys, the skill falls back to WebSearch-only mode: usable, but thinner.
Full setup: https://github.com/searchbrat/singleangle
This skill can absolutely be improved. My process: I run it, immediately give Claude feedback on what worked and what didn't, then ask it to update and reship. This version is maybe 70% of what it could be; I have a version I'd call 100%, but it's more complex and tied into my overall content system, so I'll share that via a paid post. This one is still REALLY good.
AI makes everyone’s starting point the same. Using AI to start at a totally different point is a great way to find differentiated takes.
What you don’t say is the first half of taste. Where you start is the second.
Until Next Time,
Happy AI’fying,
Kieran