How to do 'Micro-Audience' Marketing with AI
AI will change marketing. Here's a future-looking marketing playbook called 'micro-audience' marketing.
Marketing is changing.
AI is going to mean we'll need to rebuild a lot of our marketing playbooks. But what are those playbooks?
I've been deep in the lab, specifically in Perplexity Labs, to develop a playbook I've started mapping out called 'Micro-Audience' marketing.
To me, micro-audience marketing is a future playbook because AI will enable us to go from marketing at the segment level to the micro-audience level. It means we can get much more tailored and personalised in the types of marketing campaigns we run.
You’ll use AI to create ‘hunts’ that split your ICP into many smaller audiences, meaning you can create end-to-end buying experiences.
What you'll learn in this tutorial:
1. Use Claude to create a high-level ICP for your product.
2. Load a single high-level ICP into Perplexity Pro.
3. Create a 'hunt' where we mine live LinkedIn job ads for new or rising KPIs.
4. Cluster the 'rising' KPIs into pain-specific micro-audiences.
5. Auto-write "Signal → Pain → Hook → Campaign" cards you can hand straight to a marketing campaign agent to create a micro-audience marketing campaign.
Below is an example of a Micro-Audience card.
Note: This is a flavour of what we'll cover in paid deep dives. This is a slightly more in-depth and technical post. I've aimed to explain it step by step, including all prompts and screenshots, as well as a video walkthrough below. Please remember that this is a concept; we'll build on it in future posts. For those fans of MATG: a lightweight version of this went live today, but this post is much more detailed.
What is Perplexity Labs?
Perplexity Labs is available as part of your Perplexity Pro subscription. It combines real-time web research, code execution, data visualization, and asset generation into end-to-end workflows that maintain accuracy through transparent source citations.
Here is the complete video walkthrough.
Step 1 – Build the Core Ideal Customer Profile Template
We'll use Claude to build an ideal customer profile for your product. We'll do this using Claude's ability to access internal data sources, such as Google Drive and email, as well as the external sources listed in the prompt. This ICP template will serve as the input for Perplexity Labs, allowing us to begin creating our Micro-Audience campaign.
The following is the entire prompt. It’s HUGE because it includes the full ICP template we want Claude to populate.
# ROLE & OUTPUT
# ---------------------------------------------------------
# You are my **ICP-Builder assistant**.
# Your task: assemble a complete Core ICP template for our CRM
# by (1) mining INTERNAL assets first, then (2) enriching any
# missing fields with EXTERNAL data sources listed below.
# Return the finished template in the exact 7-section format
# shown under “OUTPUT FORMAT”. If a field cannot be found,
# insert “TBD” so I can fill it later.
# =========================================================
# ---------------------------------------------------------
# 1. INTERNAL ASSET SEARCH (highest priority)
# ---------------------------------------------------------
# Search all Google Drive locations you can access.
# Pull docs whose **file name OR title** contains ANY of these seed terms:
# • “customer” • “persona” • “win rate” • “case study”
# • “voice of customer” • “KPI” • “pain” • “ICP”
# Prioritise the ten most recently edited files.
# ---------------------------------------------------------
# 2. EXTERNAL SOURCE MENU (fill only missing data)
# ---------------------------------------------------------
EXTERNAL_SOURCES ⤵︎
• Analyst reports : Gartner “Market Guide for CRM”, IDC SaaS Tracker
• Job-ad corpus : LinkedIn Jobs + Indeed (past 90 days)
• Tech-stack intel : StackShare + G2 (> 50 reviews)
• Funding signals : Crunchbase (csv export)
• Community pains : Reddit r/salesops, GrowthHackers threads (past 12 mo)
# ---------------------------------------------------------
# 3. FIELD-MAPPING LOGIC (how to populate each template slot)
# ---------------------------------------------------------
• “Market tier focus” → derive from avg. ARR / employee count
• “Known Success Metrics” → scan internal docs for KPIs; back-fill with
LinkedIn job-ad metrics if empty
• “Typical Pain Themes” → mine support tickets + r/salesops threads
• “Competitors / DIY options” → pull from StackShare & G2
• “Differentiators (Proof)” → strongest internal case-study metric
• “Exclusion Flags” → firmographic blockers + tech clashes
# ---------------------------------------------------------
# 4. OUTPUT FORMAT (return exactly this block)
# ---------------------------------------------------------
# ---------- CORE ICP TEMPLATE ----------
# 1 COMPANY & PRODUCT SNAPSHOT
Company name: <filled>
Primary product: <filled>
Category keywords: <filled>
Pricing model: <filled>
Market tier focus: <filled>
# 2 CORE BUYER PERSONA
Job titles closed: <filled>
Functional area: <filled>
Day-in-the-life: <filled>
# 3 KNOWN SUCCESS METRICS (BASELINE_KPIS)
Primary metric: <filled>
Secondary metric(s): <filled>
# 4 TYPICAL PAINS
Technical pains: <filled>
Business pains: <filled>
# 5 ALTERNATIVES / COMPETITORS
Direct competitors: <filled>
DIY option: <filled>
# 6 DIFFERENTIATORS
1: <filled>
2: <filled>
Proof: <filled> # cite internal case study or external stat
# 7 EXCLUSION FLAGS
Firmographics: <filled>
Tech-stack clashes: <filled>
# ---------- END TEMPLATE ----------
# ---------------------------------------------------------
# 5. QUALITY GUARDRAILS
# ---------------------------------------------------------
• **Cite** external sources in parentheses.
• If conflicting data exists, choose the most recent OR
highest-intent source and note the discarded value in a footnote.
• Use plain language; avoid jargon in pain statements.
# 6. CONFIRMATION
# ---------------------------------------------------------
When finished, output the completed template only—no extra commentary.
Reply “Template complete ✅” on its own line below the block.
# =========================================================
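Once Claude returns the completed template, you may want it in a structured form for the later steps. Here's a minimal sketch of parsing the "Field: value" lines of the template into a dict; the function name and the sample text are purely illustrative, and the field names follow the template above.

```python
# A minimal sketch: turn the plain-text Core ICP template into a dict
# so the baseline KPIs, buyer roles, etc. can be reused programmatically.

def parse_icp_template(text: str) -> dict:
    """Collect 'Field: value' pairs, skipping comment and divider lines."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # section headers and dividers start with '#'
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Hypothetical snippet of a filled-in template:
sample = """
# 1 COMPANY & PRODUCT SNAPSHOT
Company name: Acme CRM
# 3 KNOWN SUCCESS METRICS (BASELINE_KPIS)
Primary metric: shorten sales cycle
"""
icp = parse_icp_template(sample)
print(icp["Company name"])     # Acme CRM
print(icp["Primary metric"])   # shorten sales cycle
```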
Step 2 – “Store-Only” Context Block
This step loads some key data into Perplexity Labs' context window before we move on to the later steps.
It loads in some important data that will allow us to run our first ‘hunt’ to create a micro-audience:
Baseline KPIs: these are metrics your typical ICP is responsible for
Discovery phrases: these are a list of adjacent metrics the AI will create from that baseline list
New metric tagging: we’ll tag as “NEW_METRIC” any discovery phrase we find in LinkedIn job ads that isn’t in the baseline list (covered in the next step).
In the next step we’ll use these discovery phrases to find, via job ads, hot new metrics our ICP is responsible for; those metrics help us tailor marketing campaigns to these micro-audiences.
This data is loaded into Perplexity’s memory without triggering a search.
Your results will look like this (see the video for a walkthrough); the prompt is below:
# =========================================================
# ROLE
# You are Context-Builder-Bot. The ICP template is above
# in this conversation. Build a “store-only context block”
# for the micro-audience hunt.
# TASKS
# 1 ── EXTRACT from the ICP:
# • BASELINE_KPIS ← Section 3 (“Known Success Metrics”)
# • BUYER_ROLES ← Section 2 (“Job titles closed”)
# • CATEGORY_KEYWORDS ← Section 1 (“Category keywords”)
# • MARKET_TIER ← Section 1 (“Market tier focus”)
# 2 ── GENERATE DISCOVERY_PHRASES
# • For each Baseline KPI, create 1–2 *adjacent* or *more
# specific* KPI phrases as marketers might word them
# in job ads. Use verb + metric style, e.g.
# “shorten sales cycle”, “reduce CAC payback”.
# • Add 3 extra phrases that tie to the pains listed in
# Section 4 of the ICP. Mark them with ★.
# 3 ── OUTPUT the Step 2 context block exactly like this:
# ---------------------------------------------------------
# === CONTEXT — store only, take NO action ===
# Baseline KPIs →
# • <list>
#
# Discovery phrases →
# “…” , “…” , … (★ marks pain-derived phrases)
#
# Target job-ad roles →
# <comma list from BUYER_ROLES>
#
# Category filter →
# <comma list from CATEGORY_KEYWORDS>
#
# Exclude keyword →
# salesforce
#
# Time window →
# past 45 days
#
# RULE_WHEN_SEARCHING
# Mark any KPI phrase NOT in Baseline KPIs as **NEW_METRIC**.
#
# === INSTRUCTION ===
# DO NOT run any search yet.
# Simply reply exactly: “Context stored—awaiting next command.”
# ---------------------------------------------------------
# Only output the block—no extra commentary.
# =========================================================
Step 3 – Fire the LinkedIn Job-Ad Search (the hunt)
Ok, we’re on to the fun stuff. We’re going to do a ‘hunt’ (that’s just the name I’m using). A hunt finds unique ways to split your audience into micro-audiences by hunting down specific data.
In this example we’re doing something neat.
We’re going to look for any job ad posted to LinkedIn over the past 45 days that mentions a phrase from our discovery list, while filtering by role, category, and “-salesforce” (we exclude any job ads mentioning Salesforce).
It will pull back the most mentioned KPIs from the discovery list (see the video walkthrough for the full thing; it's much easier than images).
# ---------- LINKEDIN JOB-AD SEARCH ----------
# • site filter → LinkedIn jobs only
# • OR-chain → all DISCOVERY_PHRASES
# • role / category filters → zero in on CRM-oriented sales roles
# • date filter → fresh intent
# • minus “salesforce” → skip firms already invested in the competitor
site:linkedin.com/jobs
("shorten sales cycle" OR "reduce lead response time" OR "increase win rate"
OR "free-to-paid conversion" OR "eliminate spreadsheet tracking"
OR "centralise customer data" OR "double connect rate" OR "raise demo-set ratio"
OR "lower CAC payback" OR "improve LTV:CAC" OR "reduce manual admin")
(Founder OR "Head of Sales" OR "Sales Manager" OR "VP Marketing")
(CRM OR "sales automation" OR pipeline)
date:past_45_days
-"salesforce"
# RETURN TABLE COLUMNS
# Company | Job Title | KPI Phrase(s) (flag NEW_METRIC) | URL | Date Posted
# ---------- END SEARCH ----------
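If you wanted to regenerate this search string whenever your discovery phrases change, the OR-chain can be assembled programmatically. A minimal sketch, assuming the phrase, role, and category lists come from the Step 2 context block; this just builds the query text, it is not a LinkedIn API call.

```python
# A minimal sketch of assembling the Step 3 search string from the
# stored context block. All list values are illustrative.

def build_job_ad_query(phrases, roles, categories, exclude, window="past_45_days"):
    """Join each list into an OR-chain, quoting multi-word terms."""
    or_chain = " OR ".join(f'"{p}"' for p in phrases)
    role_chain = " OR ".join(f'"{r}"' if " " in r else r for r in roles)
    cat_chain = " OR ".join(f'"{c}"' if " " in c else c for c in categories)
    return (
        "site:linkedin.com/jobs "
        f"({or_chain}) ({role_chain}) ({cat_chain}) "
        f'date:{window} -"{exclude}"'
    )

query = build_job_ad_query(
    phrases=["shorten sales cycle", "reduce lead response time"],
    roles=["Founder", "Head of Sales"],
    categories=["CRM", "sales automation"],
    exclude="salesforce",
)
print(query)
```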
Step 4 – Cluster & Threshold
We cluster and stack-rank these companies using two simple methods:
Count new pains mentioned – The more different “new-metric” phrases a company’s job ads mention, the higher it goes on the list.
Count job ads – If two companies mention the same number of new pains, the one with more job ads (i.e., hiring more for those pains) is placed on top.
This is just for illustration; you can group and stack-rank these companies in whatever way suits you.
# ---------- CLUSTER & THRESHOLD ----------
1. Group rows by Company (use website domain if names vary).
2. ads_count = number of job-ad rows per company
kpi_count = number of DISTINCT KPI phrases (NEW_METRICs kept separate)
3. Keep companies where (ads_count ≥ 2) OR (kpi_count ≥ 2)
# → ensures we keep only firms shouting multiple signals
4. Sort by kpi_count DESC, then ads_count DESC.
5. Return:
Company | ads_count | kpi_count | KPI phrases list (NEW_METRIC flagged)
# ---------- END ----------
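The cluster-and-threshold rules above are simple enough to express in a few lines of code. A minimal sketch, assuming the Step 3 results have been collected into a list of row dicts; the company names and field names are illustrative.

```python
# A minimal sketch of the cluster & threshold step: group rows by
# company, count ads and distinct KPI phrases, keep firms shouting
# multiple signals, then sort by kpi_count, then ads_count.
from collections import defaultdict

def cluster_and_threshold(rows, min_ads=2, min_kpis=2):
    ads = defaultdict(int)
    kpis = defaultdict(set)
    for row in rows:
        company = row["company"]
        ads[company] += 1
        kpis[company].update(row["kpi_phrases"])
    kept = [
        {"company": c, "ads_count": ads[c], "kpi_count": len(kpis[c]),
         "kpi_phrases": sorted(kpis[c])}
        for c in ads
        if ads[c] >= min_ads or len(kpis[c]) >= min_kpis
    ]
    # kpi_count descending first, then ads_count descending
    kept.sort(key=lambda r: (-r["kpi_count"], -r["ads_count"]))
    return kept

rows = [
    {"company": "Acme", "kpi_phrases": ["shorten sales cycle"]},
    {"company": "Acme", "kpi_phrases": ["reduce CAC payback"]},
    {"company": "Solo", "kpi_phrases": ["increase win rate"]},
]
print(cluster_and_threshold(rows))
# Acme survives (2 ads, 2 KPIs); Solo is dropped (1 ad, 1 KPI)
```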
Step 5 – Score “Intent × Fit”
We build a simple ‘intent’ and ‘fit’ model. This is likely overkill!
How intent works: how hot is their need right now?
Recency of the freshest job ad (ads this month beat ads from six weeks ago).
Variety of new-pain KPIs mentioned (talking about 4 new metrics beats talking about 1).
How fit works: how well do they match our sweet spot?
Company size—SMB and mid-market get higher points.
Tech hints—bonus point if their ad mentions Gmail/Sheets/Mailchimp (stacks we integrate with).
# ---------- INTENT × FIT SCORING ----------
For every company retained:
# INTENT_SCORE (1-5)
latest_ad_age → 0-14 d = 5 | 15-30 d = 4 | 31-45 d = 3
+2 if kpi_count ≥ 4 | +1 if 2-3
INTENT_SCORE = min(5, age_score + kpi_bonus)
# FIT_SCORE (1-5)
LinkedIn headcount → <100 = 5 | 100-499 = 4 | 500-999 = 3 | ≥1000 = 2
+1 bonus if ad mentions Gmail, Google Workspace, Mailchimp, or “spreadsheets”
FIT_SCORE = min(5, base_score + bonus)
Return sorted table:
Company | INTENT_SCORE | FIT_SCORE | ads_count | kpi_count | KPI phrases | Latest-ad date
# ---------- END ----------
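The scoring rules are deterministic, so you could also compute them yourself rather than asking the model. A minimal sketch of the Intent × Fit rules above; the thresholds come straight from the prompt, and the input fields are assumptions for illustration.

```python
# A minimal sketch of the INTENT_SCORE and FIT_SCORE rules.
STACK_HINTS = ("gmail", "google workspace", "mailchimp", "spreadsheets")

def intent_score(latest_ad_age_days: int, kpi_count: int) -> int:
    """Recency band plus a bonus for KPI variety, capped at 5."""
    if latest_ad_age_days <= 14:
        age = 5
    elif latest_ad_age_days <= 30:
        age = 4
    else:
        age = 3  # 31-45 days
    bonus = 2 if kpi_count >= 4 else (1 if kpi_count >= 2 else 0)
    return min(5, age + bonus)

def fit_score(headcount: int, ad_text: str) -> int:
    """Company-size band plus a bonus for friendly tech-stack hints."""
    if headcount < 100:
        base = 5
    elif headcount < 500:
        base = 4
    elif headcount < 1000:
        base = 3
    else:
        base = 2
    bonus = 1 if any(h in ad_text.lower() for h in STACK_HINTS) else 0
    return min(5, base + bonus)

print(intent_score(10, 3))  # 5 (fresh ad, 2-3 KPIs, capped at 5)
print(fit_score(80, "Must love Gmail and spreadsheets"))  # 5
```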
Step 6 – Build Micro-Audience Cards
Now we group these clusters into our micro-audiences. We do this by converting company clusters of similar KPI phrases into campaign-ready briefs.
# ---------- MICRO-AUDIENCE CARDS ----------
# 1 Cluster by shared NEW_METRIC phrase
# 2 Build a card per cluster (size ≥ 1 is fine for illustration)
For each cluster:
---
**Micro-Audience:** <give a punchy name, e.g. “Manual-Task Eliminators”>
**Signal:** <# companies> SMBs posted job ads this week mentioning
“<TOP_KPI phrase>”.
**Pain:** <first-person pain sentence framed from buyer POV>
**Hook:** <one-liner showing exactly how HubSpot CRM fixes that pain>
**Suggested Campaign:**
• **Primary channel:** <LinkedIn, outbound email, paid search…>
• **Creative angle:** <headline or subject line using the KPI>
• **CTA:** <mini-demo, calculator, or case-study download>
---
# Output cards only—no extra commentary.
# ---------- END ----------
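If you want to assemble the cards outside the model, the format is just string templating. A minimal sketch that renders one cluster as a card; the pain and hook copy would normally be written by the model, so every string here is a placeholder.

```python
# A minimal sketch of rendering one cluster dict as a Micro-Audience
# card, following the field layout above.

def build_card(cluster: dict) -> str:
    return "\n".join([
        "---",
        f"**Micro-Audience:** {cluster['name']}",
        f"**Signal:** {cluster['company_count']} SMBs posted job ads "
        f"this week mentioning “{cluster['top_kpi']}”.",
        f"**Pain:** {cluster['pain']}",
        f"**Hook:** {cluster['hook']}",
        "**Suggested Campaign:**",
        f"  • **Primary channel:** {cluster['channel']}",
        f"  • **Creative angle:** {cluster['angle']}",
        f"  • **CTA:** {cluster['cta']}",
        "---",
    ])

card = build_card({
    "name": "Manual-Task Eliminators",
    "company_count": 4,
    "top_kpi": "reduce manual admin",
    "pain": "I lose hours every week to copy-paste admin.",
    "hook": "Automate the admin so reps sell instead of type.",
    "channel": "LinkedIn",
    "angle": "Cut manual admin in half this quarter",
    "cta": "mini-demo",
})
print(card)
```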
Here’s how you could widen the net. It’s purely optional.
• Change the job posting date:past_45_days → past_90_days.
• Lower the cluster rule to include single-ad companies (these are currently excluded).
• Add extra discovery phrases (e.g., “increase sales velocity”).
• Re-run Steps 3-6—everything else stays the same.
Ok, that is a LOT, I mean a LOT, this is a novel.
This is just one ‘hunt’; you could run dozens upon dozens of them. Here’s another example:
Bonus Hunt Starter – “Stack-Synergy Signals”
What this extra hunt does: Finds firms that have two complementary tools (e.g., Calendly + Slack) but no HubSpot, signalling an integration gap you solve out-of-the-box.
# ---------- STACK-SYNERGY CONTEXT ----------
GOAL
Identify companies listing BOTH tools in any pair below but NOT HubSpot.
DISCOVERY_PAIRS
("Calendly" AND "Slack")
("Zapier" AND "Google Sheets")
("Intercom" AND "Stripe")
SEARCH_FILTERS
site:stackshare.io OR site:medium.com "tech stack"
date:past_90_days
-"HubSpot"
RULE
If page contains a DISCOVERY_PAIR and lacks HubSpot → tag STACK_GAP.
# DO NOT search yet—just reply “Context stored—awaiting next command.”
# ---------- END ----------
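As with Step 3, the pair searches can be generated from the DISCOVERY_PAIRS list. A minimal sketch that expands each pair into one query string using the same site, date, and exclusion filters as above; nothing here executes a search.

```python
# A minimal sketch: one search query per DISCOVERY_PAIR,
# excluding pages that mention HubSpot.
PAIRS = [
    ("Calendly", "Slack"),
    ("Zapier", "Google Sheets"),
    ("Intercom", "Stripe"),
]

def pair_queries(pairs, exclude="HubSpot", window="past_90_days"):
    base = 'site:stackshare.io OR site:medium.com "tech stack"'
    return [
        f'{base} "{a}" "{b}" date:{window} -"{exclude}"'
        for a, b in pairs
    ]

for q in pair_queries(PAIRS):
    print(q)
```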
Go forth and hunt!
Until Next Time
Kieran