How bots ruin on-site experiences for real humans
Contributor Liad Agmon explains why the problem of fake web traffic extends far beyond advertising.
From the rugged alleys of Sweetwater to Facebook feeds filled with fake news, the impact of bots has never been greater.
As online advertising is projected to hit $77 billion in 2017, bots fraudulently siphon tens of millions of dollars from marketers every day, undermining fundamental trust in the industry. The meteoric rise of fake traffic presents an existential challenge to demand-side platforms, agency trading desks and other ad tech vendors: proving the basic integrity of their product.
Furthermore, 75 percent of publishers admit to being unable to differentiate between bot and human traffic. Three-quarters of media outlets are selling digital audiences — the fundamental lifeblood of their businesses — without any way to prove they are real.
But bots do far more than just spend their lonely days clicking ads. They also spend time at each level of the purchase funnel.
Taking back your data
Marketers commonly think of bots in terms of wasted ad impressions and clicks, but fake traffic has a much broader impact on all stages of conversion. Traffic from bots can lead to subpar on-site experiences for real users by skewing the data that marketers rely on for optimization.
For example, the optimization activity most susceptible to bot interference is A/B testing. In theory, running A/B or multivariate tests and dynamically allocating traffic to the highest-performing variations should be a relatively straightforward process. But if non-human clicks overwhelmingly favor a less popular variant, the experience that real consumers prefer will appear to underperform.
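A hypothetical sketch of how that skew plays out (all figures below are invented for illustration, not real traffic data): real humans prefer variant B, but a bot farm hammering variant A makes A look like the runaway winner.

```python
# Hypothetical illustration: bot clicks flipping an A/B test winner.
# Every number here is invented for the sketch.

human_visits = {"A": 10_000, "B": 10_000}
human_clicks = {"A": 300, "B": 400}      # real humans prefer B (3% vs. 4%)
bot_visits   = {"A": 5_000, "B": 500}    # bots happen to hammer variant A
bot_clicks   = {"A": 2_500, "B": 250}    # and click half the time

for variant in ("A", "B"):
    true_ctr = human_clicks[variant] / human_visits[variant]
    observed_ctr = (human_clicks[variant] + bot_clicks[variant]) / (
        human_visits[variant] + bot_visits[variant]
    )
    print(variant, f"true={true_ctr:.1%}", f"observed={observed_ctr:.1%}")

# A true=3.0% observed=18.7%  -> A looks like the clear winner
# B true=4.0% observed=6.2%   -> B, the real winner, appears to lose
```

A traffic allocator optimizing on the observed numbers would shift real visitors toward variant A, the experience humans actually like less.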
In order to accurately personalize and optimize their websites, marketers must be able to trust that all visitor actions come from real consumers. But beyond the tried-and-true practice of filtering Google Analytics and other data sources for bot traffic, how can marketers focus on optimization strategies that deliver insights into the behavior of real humans?
It starts with a humble realization of the types of data that can be impacted by bots. Essentially, any on-site click can come from a bot — so any data source that looks at clicks only must be taken with a sizable grain of salt. This includes pages visited and clicks on recommended products or content. The “power user” who looks at every item in your catalogue might not even have a bank account.
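One modest first step is to flag and discount click sessions whose behavior is implausible for a human before they ever reach your reports. The sketch below uses hypothetical session fields and thresholds, not any specific analytics platform's schema:

```python
# A minimal sketch of discounting suspicious click sessions before analysis.
# The session fields, agent tokens and thresholds are all assumptions for
# illustration, not drawn from any real analytics product.

KNOWN_BOT_AGENTS = ("bot", "crawler", "spider", "headless")

def looks_like_bot(session: dict) -> bool:
    """Flag sessions whose click behavior is implausible for a human."""
    agent = session.get("user_agent", "").lower()
    if any(token in agent for token in KNOWN_BOT_AGENTS):
        return True
    # A "visitor" who clicks through the entire catalogue in under a
    # minute is more likely a scraper than a power user.
    if session["clicks"] > 100 and session["duration_seconds"] < 60:
        return True
    return False

sessions = [
    {"user_agent": "Mozilla/5.0", "clicks": 12, "duration_seconds": 340},
    {"user_agent": "Mozilla/5.0 (compatible; SomeBot/1.0)", "clicks": 8,
     "duration_seconds": 20},
    {"user_agent": "Mozilla/5.0", "clicks": 480, "duration_seconds": 15},
]

human_sessions = [s for s in sessions if not looks_like_bot(s)]
print(len(human_sessions))  # 1 of the 3 sample sessions survives the filter
```

Heuristics like these only catch the lazier bots, which is exactly why the next step matters: anchoring optimization on signals bots rarely fake.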
But even as bots become more sophisticated, some behaviors, such as actually completing a purchase or paying for media subscriptions, indicate true human behavior. So optimizing your testing strategies by choosing a KPI that indicates genuine human behavior — rather than, say, click-throughs — will help you create a richer data set based on the online activities of genuine users.
Make no mistake: There’s a small efficiency tradeoff. Revenue-based optimization requires far longer experiments than CTR-based tests and is more complicated to attribute. However, over time, revenue-based optimization shows real buyer patterns and can filter out some of the random actions made by bots.
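The "far longer experiments" tradeoff can be made concrete with the standard two-proportion sample-size formula. The baseline rates below are invented for illustration, but the gap they produce is typical: purchase rates are an order of magnitude lower than click-through rates, so detecting the same relative lift takes far more traffic.

```python
# Back-of-the-envelope sample sizes (standard two-proportion formula),
# showing why a purchase-based test runs far longer than a CTR-based one.
# Baseline rates are invented for the sketch.
from math import ceil

def sample_size_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors per variant to detect a shift from p1 to p2
    at 95% confidence and 80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# CTR test: 4% baseline, detect a relative 10% lift (4.0% -> 4.4%)
ctr_n = sample_size_per_arm(0.040, 0.044)
# Purchase test: 0.5% baseline, detect the same relative 10% lift
rev_n = sample_size_per_arm(0.0050, 0.0055)

print(ctr_n, rev_n)  # the purchase-based test needs roughly 8x more traffic
```

The purchase-based test is slower, but every conversion it counts is almost certainly a human with a working credit card.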
The impact of bots is felt in every corner of the internet. So many bots now follow me on Twitter that I have a good chance of becoming their leader. Marketers who want to ensure that they’re serving experiences that engage real visitors must recognize that the impact of these bots extends far beyond online advertising — and tailor their strategies accordingly.
Some opinions expressed in this article may be those of a guest author and not necessarily Marketing Land.