There have been some shocking headlines in the last few months, including 'Mercedes online ads viewed more by fraudster robots than humans', which ran in the FT in May. Awareness of the fraud problem in digital advertising means we have to move quickly to deal with it, and learn how to minimise its impact on campaigns.
A key part of the puzzle is how we measure and give credit for performance.
Currently, the way we do this is fundamentally flawed: we need to move away from correlation-based last-touch models that actively incentivise the very fraud we are trying so hard to stop.
A fraudulent ad is one that never has the opportunity to be seen by a human. There are two broad areas we focus on: CPM fraud and bot fraud. The first, CPM fraud, involves unscrupulous publishers knowingly trying to defraud an advertiser. This type of fraud includes stuffing 1×1 pixels all over a page and serving a series of ads into them.
Impression stuffing is the layering of seven, eight, nine or 10 impressions on top of each other in an ad slot, so that only the top ad is visible. In the video space we see similar behaviour: video players stuffed into 1×1 iframes, or videos looping one right after the other without ever being shown to users. The second area is bot fraud: non-human traffic. This type of fraud exists where a machine has been taken over by a bot, which instructs it to load ads behind the scenes.
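The CPM fraud patterns described above can be caught with simple structural checks. The sketch below is illustrative only, with invented field names for the impression records: it flags creatives rendered into 1×1 slots and any impression layered beneath another in the same slot.

```python
def flag_unviewable(impressions):
    """Return the impression records that can never be seen by a human.

    Two checks, matching the patterns described in the text:
    - 1x1 stuffing: the creative has no visible area;
    - impression stacking: several ads layered in one slot, where only
      the first (topmost) ad can actually be seen.
    Field names ("page", "slot_id", "width", "height") are hypothetical.
    """
    flagged = []
    seen_slots = set()
    for imp in impressions:
        # 1x1 pixel/iframe stuffing: no visible area at all.
        if imp["width"] <= 1 or imp["height"] <= 1:
            flagged.append(imp)
            continue
        # Stacking: a second impression in an already-filled slot.
        slot = (imp["page"], imp["slot_id"])
        if slot in seen_slots:
            flagged.append(imp)
        seen_slots.add(slot)
    return flagged

imps = [
    {"page": "p", "slot_id": "a", "width": 300, "height": 250},
    {"page": "p", "slot_id": "a", "width": 300, "height": 250},  # stacked
    {"page": "p", "slot_id": "b", "width": 1, "height": 1},      # stuffed
]
print(len(flag_unviewable(imps)))  # 2 of the 3 impressions are flagged
```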
There are many such botnets out there, generating millions of ad impressions a day that have no opportunity of being seen by a human. By looking at behavioural patterns and activity on infected machines, we can differentiate whether the signals come from a bot or a human, and block ads from being served to those machines.
Given the scale we deal with in the industry, manual processes can't find fraudsters alone. They're too smart and they move too quickly, so you need to leverage tools to help you identify and rid your exchange, network or campaign of fraud.
As well as blocking ad fraud when we see it, we need to disincentivise those who commit it. Currently, the way we measure performance online is ineffective: the industry uses correlation-based models. Was the last touch associated with this conversion? If so, the publisher gets the credit.
But just because I saw the ad last doesn’t mean it’s the cause of my conversion. David Hahn, our SVP Product explains why: “We need to move to causality as a performance indicator and not correlation. One of the things we work on with our buy-side clients is how to derive causality for these campaigns. If I’m being measured on last touch, I have an easy way to play the system. That is exactly how the fraudsters are winning.”
Take the example of three publishers on a campaign. Publisher one serves 100,000 impressions and it’s a direct premium publisher with almost no fraud on the campaign. Publisher two serves 500,000 impressions and half are fraudulent. Publisher three serves a million impressions and three quarters are fraudulent.
If you're using last-touch or last-click attribution, chances are publisher three will wind up with some correlation-based conversions, simply because it is serving so many more ads, and many of those last touches will come from the fraudulent impressions it serves. If you calculate attribution based on causality rather than correlation, any fraudulent impressions served by publishers two and three are automatically eliminated from the possibility of converting.
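The arithmetic of the three-publisher example works out as follows. This sketch simply applies the fraud rates from the example before attribution runs, which is the essence of the causality-based approach described above:

```python
# Impressions served and the fraudulent share, per the three-publisher
# example above.
publishers = {
    "publisher_1": {"served": 100_000,   "fraud_rate": 0.0},
    "publisher_2": {"served": 500_000,   "fraud_rate": 0.5},
    "publisher_3": {"served": 1_000_000, "fraud_rate": 0.75},
}

# Under a causality-based model, fraudulent impressions are removed
# before any credit is assigned; only clean impressions can convert.
eligible = {
    name: int(p["served"] * (1 - p["fraud_rate"]))
    for name, p in publishers.items()
}
print(eligible)
# publisher_3 drops from 1,000,000 served impressions to 250,000 eligible.
```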
That means publisher three would have only 250,000 impressions that could count toward attribution, versus a million. Recently, we saw a DSP client of ours optimise around what they thought was a viewable impression. In fact, it was a viewable impression being served by a bot, which the DSP counted as valid. And the vendor they were using – not us – was measuring it as in-view and optimising around it.
However, it was fraud. The performance of the campaign never improved; the DSP thought it was doing a good job optimising around viewability, when it was really optimising around fraud. It's clear that if you only look at correlation-based metrics, you'll never derive the true performance of a campaign, and we will never remove ad fraud from our digital buys.