The Need For People In Marketing Automation

This article was first published on Brand Republic.

Marketing automation is brilliant. We all know why automation is brilliant: it saves us time, looks at everything in more detail than we can and makes more accurate decisions. We also know why it’s dangerous. We’ve all seen Amazon sellers who use pricing scripts to stay one penny cheaper than the competition, working against other identical scripts in a race to the bottom for sales at prices they can’t honour.

It’s not just a safety issue either. Rogue algorithms can be kept under control, but what should those algorithms even do?

We typically use automation either to help us manage bids or to report more efficiently. Bid management is a mature area that can ingest and process a lot of data to help make predictions about tomorrow, based on what happened yesterday. Multiple date ranges, market trends, seasonality and more all feed into a good predictive algorithm, especially when we understand where it fits within the wider pattern.

Should we sacrifice a good keyword’s performance to free up budget for a wider range of lower-volume but better-performing ones? How far can those keywords be pushed? Where is the point of inflection? Good tech helps answer those questions and make decisions.

But what decisions need to be made? That’s a good question, and one the tech can’t answer. There is so much data that could be looked at before making a decision, whether on bids, ads, landing pages, content or more. The more data, the more dangerous data mining becomes: every extra metric multiplies the number of potential pairings you could test, and more combinations mean more false positives.
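To put rough numbers on that, here’s a minimal sketch in Python (purely illustrative, not tied to any particular platform or report). With 100 metrics there are already 4,950 possible pairings, and at a conventional 5% significance threshold you would expect roughly 250 of them to look “significant” even if every metric were pure noise.

```python
from math import comb

# Illustrative only: count the pairwise comparisons hiding in a report
# with `n_metrics` columns, and how many false positives a conventional
# 5% threshold would be expected to let through if every metric were noise.
for n_metrics in (10, 50, 100, 500):
    pairs = comb(n_metrics, 2)               # number of distinct metric pairs
    expected_false_positives = 0.05 * pairs  # alpha = 0.05 by convention
    print(f"{n_metrics:>4} metrics -> {pairs:>7} pairs, "
          f"~{expected_false_positives:,.0f} spurious 'significant' correlations")
```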

False positives can be stronger correlations than real connections

The strength of the correlation between two unrelated datasets typically follows a normal curve centred on zero. That means strong correlations are unlikely unless they exist for a reason. But every dataset you add creates more pairs to compare, and some of those pairs will show strong correlations by sheer weight of numbers. There’s nothing stopping those chance correlations being even stronger than the real mechanisms or predictors you’re trying to spot.
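A minimal simulation sketch makes the point, assuming nothing about any particular account (the data below is pure random noise generated with NumPy): the strongest chance correlation grows as you add metrics, and can easily rival the modest real relationships we hunt for in campaign data.

```python
import numpy as np

# Illustrative only: every column here is independent noise, yet as we add
# more columns the strongest chance correlation tends to creep upwards.
rng = np.random.default_rng(42)
n_days = 90  # e.g. a quarter of daily data

for n_metrics in (5, 50, 500):
    data = rng.standard_normal((n_days, n_metrics))
    corr = np.corrcoef(data, rowvar=False)                    # pairwise correlations
    off_diag = np.abs(corr[np.triu_indices(n_metrics, k=1)])  # upper triangle only
    print(f"{n_metrics:>3} random metrics over {n_days} days: "
          f"strongest chance correlation = {off_diag.max():.2f}")
```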

Scientists used to think coffee caused cancer. It took a smart second look to determine that coffee drinking is correlated with smoking, and smoking causes cancer. Coffee didn’t.

Starting with the data is a bad idea, so we need to bring humans back into the equation. What connections do we think there are? Why? If you can justify to me why two datasets might be connected (the impact of rainy days on conversion rates for beach holidays, for instance) and the data then turns out to be correlated, I’ll believe you and agree it should be part of the bidding algorithm. A placement with poor conversion rates would typically be turned down or off by an algorithm, but a human can determine the cause and make campaign changes to adapt.

But bidding is now more complex than a single CPC bid on a keyword, or a CPM for a placement. Devices, locations, days and times, remarketing status… each of these gives us a way to modify our decisions: our bidding, our ads and our landing pages. As these modifying factors interact, things get complex quickly. We rely on automation to handle that complexity for us, but it makes planning and strategy more difficult.
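As a rough illustration of how those modifiers compound, here’s a minimal sketch with invented adjustment values (the segment names and percentages are hypothetical, not taken from any real account): each adjustment multiplies the base bid, so a handful of individually sensible tweaks can land on an effective bid you never explicitly chose.

```python
# Illustrative only: hypothetical bid adjustments compounding on a base CPC.
# The figures are invented; real platforms apply their own rules and caps.
base_cpc = 1.50  # base keyword bid in £

modifiers = {
    "device: mobile": 0.85,           # -15%
    "location: London": 1.20,         # +20%
    "time: weekday evening": 1.10,    # +10%
    "audience: past visitors": 1.30,  # +30%
}

effective_bid = base_cpc
for name, multiplier in modifiers.items():
    effective_bid *= multiplier
    print(f"after {name:<25} -> £{effective_bid:.2f}")

# Four modest adjustments turn a £1.50 base bid into roughly £2.19,
# and with dozens of segments the number of combinations explodes.
```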

The move towards increased automation is driven by increased complexity. But that same complexity also drives a greater reliance on human decisions, an understanding of our biases, and an ability to reason about “why”. So, as this trend continues, rather than removing humans from the media-buying equation, we see people becoming a more integral and more valuable resource.