The prompting techniques I still use in 2026
Prompting still matters in 2026. Here are my best techniques
Prompting was incredibly popular in the early part of 2025. AI influencers pimped out their prompts across social media in return for a huge amount of engagement.
But then the models got better, and prompting wasn’t such a hot topic for the average user.
The reality about prompting in 2026 is this: give the model better context, not better instructions.
The 2026 models are smart. Really smart. You don’t need a lot of complex prompting techniques. You don’t need to be combing through Reddit forums looking for prompts to impress your co-workers. You mostly just need to give them great information, examples, and have a couple of powerful techniques to maximise your results.
I went across ChatGPT, Gemini, and Claude and asked which of the prompting techniques I use are most powerful with the current models. Here are the five best for 2026.
1. Show, Don’t Tell
This is the single most powerful prompting technique, and it’s been true since the origins of GPT-3.
Instead of writing a paragraph explaining what you want, just show an example.
The weak way:
Write me a LinkedIn post. Make it punchy and conversational. Use short sentences. Start with a bold hook. Include a clear takeaway at the end. Don’t use corporate jargon.

That prompt is 40 words of instructions that the model will half-follow.
The better way:
Here are 3 LinkedIn posts that got a lot of engagement for me last month:

[paste example 1]
[paste example 2]
[paste example 3]

Write a new one about AI agents for solopreneurs in the same style.

Now, the above is just to illustrate the technique. Do not have AI write your LinkedIn posts. Please!
But the principle holds: find examples that work, paste them in, and ask for more like that.
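The pattern above is easy to systematize. Here is a minimal sketch of a few-shot prompt builder: it pastes your working examples first, then states the task. The function name and the example posts are my own illustration, not from the article.

```python
def few_shot_prompt(examples: list[str], task: str) -> str:
    """Build a 'show, don't tell' prompt: working examples first,
    then the task, then a nudge to match the examples' style."""
    parts = ["Here are examples of the style I want:"]
    for i, example in enumerate(examples, 1):
        parts.append(f"--- Example {i} ---\n{example}")
    parts.append(
        f"Task: {task}\n"
        "Match the tone, structure, and length of the examples."
    )
    return "\n\n".join(parts)

# Hypothetical usage with two placeholder posts:
prompt = few_shot_prompt(
    [
        "Hot take: most dashboards answer questions nobody asked...",
        "I killed our weekly report. Nobody noticed. Here's why...",
    ],
    "Write a new post about AI agents for solopreneurs.",
)
```

The examples do the heavy lifting; the instruction at the end is deliberately one line.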
2. Feed Real Data, Get Real Insights
One of the most valuable lessons about prompting is: the prompt isn’t the magic, the data is.
If you ask Claude to “write objection-handling copy,” you’ll get generic marketing speak. If you feed it actual objections from your last 5 sales calls, you’ll get copy that sounds like your customers wrote it.
If you ask Claude to “write a prospecting email,” you’ll get generic outreach that sounds like every other AI-generated message in their inbox. If you feed it real information about the prospect, you’ll get something that actually sounds like you did the research and crafted something just for them.
Generic prompt (weak):
Write a cold email to a VP of Marketing at a B2B SaaS company pitching my analytics tool.

Data-driven prompt (powerful):
I’m reaching out to Sarah Chen, VP of Marketing at Lattice.

Here’s what I know about her:
- She’s been in the role for 18 months (LinkedIn)
- She recently posted about struggling with attribution across channels
- Lattice just raised a Series D and is scaling their marketing team
- Their G2 reviews mention “reporting” as a weakness customers cite

Write a cold email that connects my analytics tool to her specific situation. Reference something real, not generic pain points.

You’ll see a big difference in results. One email will sound like spam. The other will sound like you actually did the work.
The better the data you feed AI, the better the results. For prospecting, especially into large accounts, you could even run a deep research project on the account and the core buyers, and use that data to better tailor the output.
Spend more time gathering your data vs. constructing your prompt.
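If you run this kind of outreach repeatedly, the prompt shape stays fixed and only the researched facts change. A sketch of that idea, using the Sarah Chen example from above (the function name is mine, for illustration):

```python
def prospect_prompt(name: str, role: str, company: str,
                    facts: list[str], product: str) -> str:
    """Assemble a data-driven cold-email prompt.
    The output is only as good as the facts you gathered."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts)
    return (
        f"I'm reaching out to {name}, {role} at {company}.\n\n"
        f"Here's what I know about them:\n{fact_lines}\n\n"
        f"Write a cold email that connects {product} to their specific "
        "situation. Reference something real, not generic pain points."
    )

# Usage with the researched facts from the example above:
prompt = prospect_prompt(
    "Sarah Chen", "VP of Marketing", "Lattice",
    [
        "She's been in the role for 18 months (LinkedIn)",
        "She recently posted about struggling with attribution",
        "Lattice just raised a Series D",
    ],
    "my analytics tool",
)
```

Note that the template contributes almost nothing; swapping the facts list is where the quality comes from.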
3. Ask Open-Ended, Not Yes/No
This one comes from research on using LLMs to simulate customer feedback. Turns out, when you ask models to rate things on a numbered scale (1 to 5 / 1 to 10), they default to safe, middle-of-the-road answers. Mostly 3s. Not that useful.
But when you ask open-ended questions? You get realistic, nuanced responses.
Don’t do this:
Please review this headline and rate it on a scale of 1-5.

Do this instead:
You’re a VP of Marketing at a mid-size SaaS company. You just saw this headline in your inbox:

“Cut Your CAC in Half Without Cutting Your Team”

What’s your gut reaction? Would you open this email? Why or why not?

You’ll get a real answer, with good commentary: “I’d probably open it, but the claim feels too good to be true. I’d want to see proof fast, or I’m out.”
You can make this more actionable by feeding the model unstructured data about your customers (VPs of Marketing) and asking it to mimic them, e.g. call transcripts and chat logs (see technique 2).
You can also do external research to gather that context.
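A minimal sketch of this technique as a reusable builder: persona plus artifact plus an open-ended question, with an optional slot for the real customer data mentioned above. Names here are my own illustration.

```python
def persona_reaction_prompt(persona: str, artifact: str,
                            customer_data: str = "") -> str:
    """Ask for a gut reaction instead of a numbered rating,
    which tends to produce safe, middle-of-the-road scores."""
    prompt = (
        f"You're {persona}. You just saw this in your inbox:\n\n"
        f"\"{artifact}\"\n\n"
        "What's your gut reaction? Would you open this email? "
        "Why or why not?"
    )
    if customer_data:
        # Optional: ground the persona in real transcripts/chat logs.
        prompt += (
            "\n\nHere is real data from people like you. Mimic their "
            f"voice and concerns:\n{customer_data}"
        )
    return prompt

prompt = persona_reaction_prompt(
    "a VP of Marketing at a mid-size SaaS company",
    "Cut Your CAC in Half Without Cutting Your Team",
)
```

The key detail is the final "Why or why not?": it forces a reason, which is the useful part of the answer.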
4. Make It Edit Itself
Here’s the simplest technique that most people skip: ask the model to critique its own work.
First drafts from AI are fine. Second drafts are better. Third drafts, after self-critique, are actually good.
Simple version:
Write a cold email for [product] targeting [persona].

Now critique your own email. What’s weak? What would make someone delete this immediately? What’s missing?

Rewrite it based on your critique.

You can run this as a loop: generate → critique → revise → critique again → final version. One thing I do is run the loop across models: give the output to a second model, ask it to critique, then provide that feedback to another model, and so on.
Also, I will say, the hidden gem in this prompt is ‘what’s missing’.
I also ask the AI to tell me what I haven’t thought of, and how it would make something better. Gold.
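The generate → critique → revise loop can be sketched in a few lines. Here, `call_model` is a placeholder for whatever chat API you use (pass a different function per round to run the loop across models, as described above); nothing here is a real provider SDK.

```python
from typing import Callable

def critique_loop(call_model: Callable[[str], str],
                  task: str, rounds: int = 2) -> str:
    """Generate a draft, then repeatedly critique and revise it.
    `call_model(prompt) -> str` is a placeholder you wire to your
    model of choice."""
    draft = call_model(f"Write: {task}")
    for _ in range(rounds):
        critique = call_model(
            "Critique this draft. What's weak? What would make someone "
            f"delete this immediately? What's missing?\n\n{draft}"
        )
        draft = call_model(
            "Rewrite the draft based on this critique.\n\n"
            f"Draft:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft
```

Each round costs two model calls (critique + rewrite) on top of the initial draft, so two rounds is usually the sweet spot before returns diminish.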
5. Store Your Context in Memory
It sounds like persistent memory is coming to Claude. See this tweet. That means Claude will store and keep things it feels are relevant so it can recall them in future chats. This will be a game-changer.
But even the current version of Claude’s memory is great for improving your results.
I store foundational things in my memory:
My working style
How I like to collaborate
The audience I write for (helps a lot with research)
My blind spots, so Claude is always aware of them
I simply tell Claude to store whatever it may be in memory. You can always validate it’s stored by asking in another chat what it knows about X.
You’ll find that the output of your prompts becomes increasingly tailored for you.
Until Next Time,
Happy AI’fying
Kieran


