The IAS Threat Lab evaluates the impact of generative AI in digital advertising
Generative AI has been commanding conversations in digital media lately. While generative AI isn’t a new concept, it has recently caught fire due to the prevalence and effectiveness of AI chatbot platforms like ChatGPT. The rapid rise of this technology has moved several tech giants to build their own AI chatbots — and has inspired malware authors and fraudsters to do the same.
The IAS Threat Lab evaluated the effects of generative AI on ad fraud, how it could accelerate the growth of ad fraud across the digital advertising industry, and how we continue to protect marketers from these emerging threats.
What is generative AI?
Generative AI is artificial intelligence that’s able to create text, images, audio, video, or other media. This technology works by learning the patterns of information or data that it ingests, and generating new, similar information.
Let’s take a look at some ways marketers could be fooled by fraud powered by generative AI.
Fake websites and falsified user agent data
Generative AI can create realistic-looking websites filled with fake content, including articles, reviews, product listings, and more. It’s also possible to have AI ingest legitimate content from outside sources and have it launder the content into seemingly original articles and news stories. These sites can then be used to host fraudulent ads and generate fake impressions.
It doesn’t stop there. Fraudsters can leverage generative AI to create websites that closely mimic legitimate publishers, fooling marketers into thinking they’re placing their ads on reputable platforms. While not strictly fraudulent, this tactic is also common on Made-For-Advertising (MFA) sites — websites that dress up seemingly legitimate content while filling the majority of viewable space with advertisements.
MFA sites can be created at scale by an individual with the help of generative AI. Due to the positive viewability and brand safety metrics of placing ads on these domains, these sites can fool marketers into believing they’re generating quality impressions — but in reality, these ad placements are low quality and a waste of ad spend. In fact, the Association of National Advertisers (ANA) reports that MFA websites represent a shocking 21% of impressions.
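One simple way to reason about MFA-style pages is the share of viewable space given over to ad slots. The sketch below is purely illustrative — the ad sizes, page dimensions, and the 0.3 cutoff are assumptions for the example, not IAS detection logic.

```python
def ad_density(ad_slot_areas, page_area):
    """Share of page area occupied by ad slots. MFA pages push this ratio
    far above typical editorial pages; 0.3 here is an illustrative cutoff."""
    return sum(ad_slot_areas) / page_area

# Three common IAB ad sizes on a hypothetical 1280x2000 px page
density = ad_density([300 * 250, 728 * 90, 300 * 600], 1280 * 2000)
mfa_like = density > 0.3
```

In practice a density score like this would be one signal among many, combined with content and layout analysis, rather than a standalone verdict.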
Generative AI can also falsify impressions by creating fake user agent strings, making it appear as if impressions are coming from legitimate devices and browsers. Fraudsters use AI models trained on vast datasets of real user agent data to generate plausible but entirely fake user agent strings. These strings are then inserted into requests made by automated bots or scripts, enabling fraud at scale that is designed to bypass behavioral fraud detection.
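To see why user agent data alone can't be trusted, consider how trivially a falsified string is attached to a request. This is a minimal sketch using Python's standard library; the user agent string and URL are hypothetical examples, and a real fraud operation would rotate through many AI-generated variants.

```python
import urllib.request

# A hypothetical user agent string; fraudsters generate many plausible
# variants like this from models trained on real user agent data.
FAKE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) "
           "AppleWebKit/605.1.15 (KHTML, like Gecko) "
           "Version/17.0 Mobile/15E148 Safari/604.1")

def build_spoofed_request(url: str) -> urllib.request.Request:
    """Attach a falsified User-Agent header to an otherwise ordinary request."""
    return urllib.request.Request(url, headers={"User-Agent": FAKE_UA})

req = build_spoofed_request("https://example.com/ad-slot")
# To the receiving server, this request now claims to come from iPhone Safari.
```

Because the header is entirely caller-controlled, detection has to rest on behavioral and deterministic signals rather than on the declared device or browser.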
Fraudulent user profiles and testimonials
Generative AI can create highly detailed user profiles. These fake profiles are often complete with demographic information, interests, and online behaviors with the intention of mimicking genuine user interactivity. Fake profiles often come hand-in-hand with AI-driven bots that can simulate user behavior, like mouse movements and ad interactions, to make fraudulent activity look natural.
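The behavior simulation described above can be as simple as generating plausible-looking cursor trajectories. The sketch below is an illustrative example of the general idea — interpolating between two screen points and adding random jitter so the path resembles a human hand — not any specific bot's implementation.

```python
import random

def simulate_mouse_path(start, end, steps=20, jitter=3.0):
    """Interpolate a straight line between two screen points, adding random
    jitter at each step so the synthetic movement looks less machine-like."""
    (x0, y0), (x1, y1) = start, end
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = x0 + (x1 - x0) * t + random.uniform(-jitter, jitter)
        y = y0 + (y1 - y0) * t + random.uniform(-jitter, jitter)
        path.append((round(x, 1), round(y, 1)))
    return path

# A fake "user" moving the cursor from the page corner toward an ad slot
path = simulate_mouse_path((0, 0), (300, 120))
```

Defenses counter this by modeling properties that naive simulations get wrong, such as velocity curves and timing variation, rather than just the presence of movement.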
Along with fake profiles, generative AI can be used to produce large volumes of positive reviews and testimonials for products or services, artificially boosting popularity and trustworthiness. This type of activity is seen consistently on major retail domains, video streaming platforms, and financial coverage websites. These fake reviews tend to be relatively obvious: they pack overly detailed information into the comment or review, and they carry an outlandish number of “likes” generated by bots.
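The two tells mentioned above — overly detailed text and inflated like counts — can be expressed as a simple heuristic. This is a toy sketch; the thresholds are invented for illustration and real review-fraud detection uses far richer signals.

```python
def flag_suspicious_review(review_text: str, likes: int,
                           max_length: int = 600,
                           max_likes: int = 10_000) -> bool:
    """Flag a review that is suspiciously long-winded or has an outlandish
    like count. Thresholds are illustrative, not production values."""
    too_detailed = len(review_text) > max_length
    too_popular = likes > max_likes
    return too_detailed or too_popular
```

Either signal alone is weak — genuine enthusiasts write long reviews too — which is why such heuristics are typically combined with profile and behavior analysis.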
How can you prevent AI-based ad fraud?
IAS’s ad fraud detection tools can help marketers mitigate the impacts of AI-based fraud. With advanced analytical, behavioral, and deterministic modeling techniques, IAS can detect and stop fraudulent activity powered by automation and AI.
In addition to fraud detection products, marketers should verify the authenticity of website content and reviews to identify potential fraud. Plus, marketers should conduct thorough vetting of publishers to ensure they’re legitimate, along with regular audits of ad campaigns, websites, and user engagement patterns to identify and address suspicious activity.
Marketers can also leverage IAS’s AI-driven MFA detection and avoidance product. Our MFA site technology improves transparency into campaign quality, identifies where spend is being allocated, and informs optimizations that minimize waste on MFA sites — so marketers can take back control of their media quality.