The Biggest AI Launches of 2025 That Changed Marketing Forever
What actually changed for marketing teams this year
What an incredible year 2025 was for AI. It felt like every week, something transformative launched.
With all those launches, it’s easy to miss what *actually* changed for how we market and grow our business.
In this post, we’ll break down the biggest AI launches of 2025: the category, the key AI products launched in it, why they were so impactful for marketing, and how top marketers are using them.
1. Content Repurposing
Key Launch:
Gemini 3 - Native YouTube video analysis that turns video content into multi-platform content assets.
Why This Matters for Marketers:
Gemini 3 turned YouTube into a treasure trove of content for smart, AI-enabled marketers. It introduced native YouTube integration: simply paste any YouTube URL and the model processes the full video (up to 1 hour at default resolution, 3 hours at low resolution), analyzing both audio and visual elements simultaneously.
This isn’t simply auto-pasting the video transcript into an LLM. Gemini 3 scored 87.6% on the Video-MMMU benchmark (a test measuring how well AI models understand and apply knowledge from professional videos across multiple disciplines), meaning it understands context, narrative flow, and visuals. A marketer can paste a competitor’s product launch video and ask Gemini to identify its positioning strategy, key features, target audience, and the content the brand should create to counter the claims made in the video, then easily turn that analysis into a blog post, LinkedIn thread, or email outreach campaign.
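For marketers comfortable with a little scripting, that workflow can be automated. Here’s a minimal sketch using Google’s `google-genai` Python SDK, which accepts YouTube URLs directly as file data; the model name, prompt wording, and video URL are illustrative assumptions, so check the current docs before relying on them.

```python
# Sketch: competitor launch-video analysis via the Gemini API.
# Assumes `pip install google-genai` and a GEMINI_API_KEY in the environment.
import os

def is_youtube_url(url: str) -> bool:
    """Light sanity check before sending a link to the model."""
    return url.startswith(("https://www.youtube.com/", "https://youtu.be/"))

def analyze_launch_video(url: str) -> str:
    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    response = client.models.generate_content(
        model="gemini-3-pro-preview",  # assumption: swap in the current model id
        contents=types.Content(parts=[
            # YouTube URLs are passed as file data, not as plain text
            types.Part(file_data=types.FileData(file_uri=url)),
            types.Part(text=(
                "Identify this launch video's positioning strategy, key "
                "features, and target audience, then outline the content "
                "our brand should create to counter its claims."
            )),
        ]),
    )
    return response.text

if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    video = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL
    if is_youtube_url(video):
        print(analyze_launch_video(video))
```

From there, the returned analysis can be piped into a second prompt that drafts the blog post or LinkedIn thread.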
The Marketing Impact:
YouTube is now an incredibly rich source of content ideas that marketers can easily repurpose into other assets.
I gave an example here of how to extract viral talking points from YouTube that can easily be turned into content across all social platforms.
2. Image and Visual Creation
Key Launches:
Nano Banana Pro (Gemini 3 Pro Image) - Google’s state-of-the-art image generator with text mastery
ChatGPT Images (GPT Image 1.5) - OpenAI’s 4x faster model with precise iterative editing
Why This Matters for Marketers:
a. Nano Banana Pro was a huge leap forward for text-to-image models. It solved the text-in-images problem that plagued every previous AI image generator. You know the problem: you asked for text within the image, and the output was just garbled nonsense.
With Nano Banana Pro, marketers can now create infographics, content assets for events, and the perfect mockup to show a designer what they need, vs. handing them a dusty marketing brief.
Nano Banana Pro isn’t just an image model; it connects to Google Search for real-world accuracy. Ask for an infographic of the best things to do in Dublin, and it will include real facts and locations instead of hallucinating details.
Marketers can now upload up to 14 reference images (logos, color palettes, product shots, brand guidelines), and Nano Banana Pro maintains consistency across every asset. You can give it content assets that showcase your brand identity, and the model will apply that identity automatically to any visuals it creates.
b. ChatGPT Image 1.5 solved the image iteration problem for marketers. You know the problem: you ask ChatGPT to make a small edit to an image, and it completely regenerates it. Ask to change the lighting, and you’d get a completely different pose, facial expression, and composition. GPT Image 1.5 uses partial regeneration to change only the specific parts you ask for.
It does this by treating images and text as the same type of data (tokens) and processing them within a unified architecture. When you say, “change the sweater from red to blue, keep everything else the same,” the model identifies which pixels represent the sweater and modifies only those, leaving facial features, lighting, and the background untouched.
This breakthrough allows marketers to iterate on an image, making slight tweaks rather than fully regenerating it with each edit, and gives them far more control over creating image assets.
The Marketing Impact:
a. Infographics for Selling: Nano Banana Pro’s progress in text-to-image means you can do things like use infographics as sales enablement assets. For example, imagine Disney were a prospect of HubSpot’s; here’s a sample prompt:
Research how many streaming titles Disney+ launched in 2024 and their total subscriber count. Create a professional infographic titled 'Scaling Content Marketing at Disney's Volume' with three sections: (1) The Challenge - show their content scale with icons for Disney+, Hulu, ESPN+ and title numbers, (2) The HubSpot Solution - show 'Automated cross-platform scheduling', 'AI-powered content personalization', 'Unified analytics dashboard' with icons, (3) The Impact - show '60% faster campaign launches' and 'unified view across 3 platforms'. Use Disney blue (#113CCF) and clean corporate design.
b. A/B test ad creative at scale with ChatGPT Images: ChatGPT Image 1.5 allows you to create multiple iterations of a single ad. Upload your product image, then prompt (swap in your own headlines):
“Create a 2x3 grid showing 6 versions of this ad. Each version uses the same product image and layout but different headlines: ‘Cut Support Tickets by 40%’, ‘Resolve Issues 40% Faster’, ‘Your Support Team’s AI Assistant’, ‘Automate 60% of Replies’, ‘24/7 Support Without Hiring’, ‘Answer in Seconds Not Hours’. Same CTA button ‘Start Free Trial’ on all. Clean grid with subtle dividers, professional B2B style”
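If you run this kind of test regularly, you can script it instead of pasting the prompt by hand. Here’s a hedged sketch using the OpenAI Python SDK’s image edit endpoint; the model id, file path, and prompt builder are illustrative assumptions, not the only way to do this.

```python
# Sketch: generate an A/B headline grid from a product image via the
# OpenAI Images API. Assumes `pip install openai` and OPENAI_API_KEY.
import os

def build_grid_prompt(headlines: list[str], cta: str = "Start Free Trial") -> str:
    """Assemble the grid prompt shown above from a list of headlines."""
    quoted = ", ".join(f"'{h}'" for h in headlines)
    return (
        f"Create a 2x3 grid showing {len(headlines)} versions of this ad. "
        f"Each version uses the same product image and layout but different "
        f"headlines: {quoted}. Same CTA button '{cta}' on all. "
        "Clean grid with subtle dividers, professional B2B style."
    )

def generate_ad_grid(image_path: str, headlines: list[str]) -> bytes:
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    result = client.images.edit(
        model="gpt-image-1",  # assumption: swap in the latest image model id
        image=open(image_path, "rb"),
        prompt=build_grid_prompt(headlines),
    )
    return base64.b64decode(result.data[0].b64_json)

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    png = generate_ad_grid("product.png", [  # placeholder file path
        "Cut Support Tickets by 40%", "Resolve Issues 40% Faster",
        "Your Support Team's AI Assistant", "Automate 60% of Replies",
        "24/7 Support Without Hiring", "Answer in Seconds Not Hours",
    ])
    with open("ad_grid.png", "wb") as f:
        f.write(png)
```

Swap the headline list per campaign and you get a repeatable creative-testing pipeline rather than a one-off prompt.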
3. Video Generation
Key Launches:
OpenAI Sora 2 - Public launch with synchronized audio generation (dialogue, sound effects, ambient sound), up to 20-second clips at 1080p, plus new iOS app
Google Veo 3.1 - Native audio generation integrated with video, up to 60 seconds at 1080p, integrated into Google Workspace
Why This Matters for Marketers:
Video production is expensive and has been out of reach for the average marketer. Sora 2 and Veo 3.1 collapsed that entire production pipeline into a text prompt.
Sora 2 and Veo 3.1 deliver the full audiovisual experience in one go, with natural dialogue, proper lip-sync, footsteps that match walking pace, doors that actually sound when they open, and so on.
Sora 2’s big PR moment was pairing the model launch with a new iOS app and social features, making AI video generation mainstream rather than just an API.
Veo 3.1 took the enterprise route, integrating directly with Google Workspace (Flow and Google Vids) so marketers can generate video assets within their existing workflow. Veo 3.1’s 60-second clips and editing capabilities, adding/removing objects, extending scenes, maintaining character consistency, give marketers production-level control without production budgets. Here’s one of my favourite examples: a dentist in LA made this video to advertise his dental practice and booked his practice out for a year.
The Marketing Impact:
These video models unlock entirely new marketing approaches that were previously impossible:
a. Hyper-personalised video campaigns at narrative scale: Like Nano Banana Pro, these models allow you to turn video into a sales tool. Prior to these models, it would have been inconceivable to build video campaigns around a single prospect. But those are exactly the opportunities to seek out with AI: making the impossible possible. Using Veo 3.1, you can put your prospects’ exact pain points inside a video story made for them. Tutorial coming on Veo 3.1 with examples of this.
b. Turn your execs into creators: I’ve talked about this many times; we’re moving to the era of personality-led growth, where people consume content from individuals vs. brands. Sora 2’s features allow you to scale your executives’ presence across your content assets. Record your CEO once with Sora 2’s cameo feature, then insert them into 50 different scenarios. What used to require executive time for 50 separate video shoots now happens with one 30-second recording and targeted prompts.
4. Automation & Workflows
Key Launches:
OpenAI Agents SDK - Framework for building custom AI agents that can orchestrate complex marketing workflows (the production successor to OpenAI’s experimental Swarm project)
Google Workspace Studio - No-code platform for building agentic workflows integrated with Gemini and Workspace
Why This Matters for Marketers:
We’ve heard all year that 2025 was the year of agents. I don’t think we saw its full potential, but there’s a reason the biggest companies are betting on platforms to build agentic workflows.
OpenAI’s Agents SDK lets technical marketers build custom agents that handle multi-step processes autonomously, including agents that monitor competitor pricing, evaluate your position, and draft positioning responses.
Google Workspace Studio democratised this for non-technical marketers, offering a visual interface to build workflows where AI agents make real decisions rather than just following preset paths, all integrated directly into Gmail, Docs, and Sheets.
These are must-have skills for the new era of marketers (stay tuned for the full post on must-learn skills in 2026).
The Marketing Impact:
There are far too many examples of agents you can build to list them all here. In the future, a single marketer will have a team of agents, allowing them to operate like an entire team.
a. Competitive intelligence that actually acts: Turn competitive intelligence into a core part of your distribution strategy. Build an agent that monitors competitor launches, analyses their positioning changes, cross-references with your product roadmap, and automatically generates response briefs for your team.
b. Lead qualification that understands context: Deploy agents that don’t just score leads on demographics, they research the prospect’s company trajectory, recent funding announcements, leadership changes, job postings indicating growth areas, then route to sales. This is very much a standard use of agent swarms today.
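To make the lead-qualification idea concrete, here’s a minimal sketch using the `openai-agents` Python package: a research agent with a hosted web-search tool, plus a toy routing rule. The routing heuristic, signal list, and thresholds are my own illustrative assumptions, not part of the SDK or a prescribed scoring model.

```python
# Sketch: context-aware lead qualification with the OpenAI Agents SDK.
# Assumes `pip install openai-agents` and OPENAI_API_KEY in the environment.
import os

# Illustrative growth signals only; replace with your own ICP criteria.
GROWTH_SIGNALS = ("recent funding", "new leadership", "hiring", "expansion")

def route_lead(signals: list[str], threshold: int = 2) -> str:
    """Toy routing rule: enough growth signals -> route to sales."""
    hits = sum(1 for s in signals if s.lower() in GROWTH_SIGNALS)
    return "sales" if hits >= threshold else "nurture"

def research_lead(company: str) -> str:
    from agents import Agent, Runner, WebSearchTool

    researcher = Agent(
        name="Lead researcher",
        instructions=(
            "Research the company's trajectory: funding announcements, "
            "leadership changes, and job postings that indicate growth. "
            "Return a short list of the growth signals you find."
        ),
        tools=[WebSearchTool()],  # hosted web search, no scraping code needed
    )
    result = Runner.run_sync(researcher, f"Qualify this lead: {company}")
    return result.final_output

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(research_lead("Acme Robotics"))  # placeholder company name
```

In practice you’d parse the researcher’s output into structured signals, feed them through your own version of `route_lead`, and push the result into your CRM.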
5. Vibe Coding
Key Launches:
Replit Agent - Autonomous coding agent that builds complete applications from natural language
Anthropic Claude Sonnet 4.5 - Best-in-class for complex coding tasks and application development
Anthropic Claude Code - Terminal-based agentic coding tool for developers
Why This Matters for Marketers:
Marketers can now genuinely build and ship a product without engineering teams. As an ex-software engineer, I love this. Claude Code is number one on my list of skills to master over the coming months.
In 2025, we saw the growth of “Vibe Coding”, where you describe what you want in plain language and watch AI build it. Marketers can go from idea to a working application without relying on an engineering team to prioritise and build it.
Replit Agent builds entire applications autonomously. Describe a calculator widget for your pricing page, and it builds the front-end interface and backend logic, then deploys it. Claude Sonnet 4.5, and more recently Opus 4.5, excel at the complex, multi-file applications marketers actually need; think custom dashboards and data visualisation tools. Claude Code brings this power to the command line, letting technical marketers delegate entire coding projects to AI from their terminal. Think of a suite of internal software built for your personal needs.
The Marketing Impact:
Marketers can now build code-powered experiences. I’ve shown many of these examples over the past few months.
a. Interactive Dashboards: I showcased an example of an interactive dashboard I’m building using Claude here. I believe this allows marketers to go from static data to insights and action.
b. Content Apps: I showcased an example of a tool you can build to turn YouTube videos into viral content (above). I’ve started to build an array of micro-apps like this. The reality for marketers is that one AI-enabled marketer can now have a team of agents and a suite of personal software to help them do their work. How does a non-AI marketer compete with them? They can’t!
2026 is going to be a big year. There is a whole new set of skills to learn and scale.
Until Next Time,
Happy AI’fying
Kieran