When everything can be generated, trust matters more than ever
A product image that looks real but not quite right.
A headline that sounds convincing until you pause and read it again.
Most people recognise that moment of hesitation. And often, that’s enough to change behaviour: scrolling past, closing a page, or quietly questioning the brand behind it.
Now imagine that reaction happening repeatedly, across multiple campaigns and channels.
For years, digital advertising has prioritised scale and speed, producing more content more quickly. But as generative AI becomes widespread, consumers are starting to ask a more fundamental question: can this be trusted?
A Shift in Consumer Expectations
We are seeing a growing gap between content and trust. According to IAS Consumer Research (2024), 56% of consumers are concerned that AI makes it easier to create misleading content.
In a digital environment saturated with AI-generated material, relevance alone is no longer enough. Consumers are increasingly evaluating the credibility, authenticity, and origin of what they see.
For brands, this creates a clear challenge:
- Those that lack transparency or responsible AI practices risk losing attention and loyalty
- Those that prioritise trust through clarity, honesty, and ethical use of AI can stand out and build stronger, longer-term relationships
How AI is Changing Discovery
Search behaviour is evolving rapidly as AI-powered interfaces become more prominent.
Many industry experts expect traditional click-based traffic to decline over time. At the same time, AI-driven discovery is proving highly valuable: visitors from AI-powered search are, on average, 4.4 times more likely to convert than those from traditional organic search.
As the path from discovery to engagement becomes less predictable, where a brand appears matters just as much as what it says.
Being placed next to low-quality or misleading content can negatively influence perception, even if the ad itself is appropriate.
Why Traditional Brand Safety Approaches Are Being Challenged
Many existing brand safety strategies were designed for an earlier version of the internet.
Tools such as keyword blocking and static exclusion lists can struggle to identify more subtle risks, particularly content that appears credible on the surface but lacks accuracy or substance.
At the same time, AI is reshaping search itself. The IAS AI B2B Study (2025) reports that 61% of media experts believe AI-powered search features will reduce web traffic and clicks.
As a result:
- Discovery journeys are becoming more fragmented
- Direct traffic is less predictable
- Traditional verification and optimisation methods are under increasing pressure
There is also growing demand for clearer standards: 68% of media experts believe digital inventory should be labelled by independent third parties, with tags such as “human-created” or “AI-audited”.
From More Data to Better Decisions
In complex, AI-driven media environments, the challenge is no longer access to data: it is making confident, informed decisions.
The focus is shifting towards:
- Interpreting data clearly
- Applying it effectively
- Ensuring automation supports, rather than undermines, performance
The objective is not simply more information, but greater confidence and control.
Managing Complexity Without Losing Oversight
As campaigns become more sophisticated, teams often face information overload, alongside concerns about:
- Lack of transparency in AI systems
- Increased brand risk
- Declining content quality across the open web
At the same time, adoption is accelerating. The IAS AI B2B Study (2025) shows that 70% of media experts consider AI a top strategic priority for 2026.
However, prioritising AI is not the same as using it effectively.
To succeed, organisations need solutions that:
- Enhance human expertise
- Maintain transparency
- Provide clear, actionable recommendations
Supporting Smarter Decisions with AI
New approaches, such as the IAS Agent, aim to support faster and more informed decision-making while keeping humans in control.
These tools can help teams to:
- Identify meaningful trends more quickly than manual analysis
- Optimise campaigns in real time using AI-informed recommendations
- Maintain transparency through explainable and adaptable insights
This is not about replacing human judgement, but about augmenting it.
Protecting Performance in an AI Content Landscape
AI-generated content has also contributed to the growth of low-quality environments, including MFA (Made-for-Advertising) sites and cluttered ad placements.
To protect media investment, advertisers can:
- Avoid low-quality inventory before bidding: use pre-bid controls to prevent ads from appearing on identified MFA sites.
- Evaluate the impact of ad clutter: measure whether crowded environments deliver meaningful results, and adjust strategy accordingly.
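As a rough illustration, a pre-bid control of this kind amounts to a filter applied before any bid is placed. The sketch below is a minimal, hypothetical version: the blocklist contents, field names, and clutter threshold are all illustrative assumptions, not an actual IAS product interface or bid-request schema.

```python
# Hypothetical pre-bid quality filter (illustrative only).
MFA_BLOCKLIST = {"example-mfa-site.test", "cluttered-ads.test"}  # assumed blocklist data
MAX_AD_SLOTS = 6  # illustrative clutter threshold

def should_bid(bid_request: dict) -> bool:
    """Return False for inventory on known MFA domains or overly cluttered pages."""
    domain = bid_request.get("site", {}).get("domain", "")
    if domain in MFA_BLOCKLIST:
        return False  # pre-bid exclusion: identified Made-for-Advertising site
    if bid_request.get("ad_slots_on_page", 0) > MAX_AD_SLOTS:
        return False  # skip heavily cluttered placements
    return True

print(should_bid({"site": {"domain": "example-mfa-site.test"}}))                       # False
print(should_bid({"site": {"domain": "news.test"}, "ad_slots_on_page": 3}))            # True
```

In practice such decisions would draw on continuously updated classifications rather than a static list, but the principle is the same: exclude low-quality environments before spend is committed.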
By combining proactive controls with AI-supported insights, brands can reduce exposure to low-quality environments and focus on placements that drive genuine outcomes.
Looking Ahead
AI will continue to play a central role in media strategies moving into 2026. But speed and scale are no longer sufficient on their own.
The key principle is simple: automation is most effective when it reinforces authenticity.
Trust is not built through volume.
It is built, step by step, with every impression.