Marketers Get Some Machine Learning
Marketer vs. machine: no problem.
In the foreword to Eric Siegel's new book Predictive Analytics, Harvard professor Thomas Davenport says that PA is “essentially amoral, and can be used for good or evil.” He's noncommittal about which side of that divide marketing falls on, as is Siegel, whose chronicles of wielding the “power to predict who will click, buy, lie, or die,” as the book's subtitle suggests, are peppered with amusing anecdotes and quotes from the likes of Twain, Brecht, and Meat Loaf. While waiting for a recent flight to board, Siegel, founder of Predictive Analytics World and ex-computer science prof at Columbia, took time to answer our questions and calm marketers' fears about machine learning.
Marketers are afraid that cold, analytical robots will supplant their creativity or, worse, that they may have to go to night school and take computer science courses. Are these fears well founded?
No. Predictive analytics just expands and augments their skill set. But it does demand other skill sets they don't have, so they need a colleague who does. The place where predictive analytics applies most squarely is in predicting which prospect on your list is going to make the purchase and which one you need to cut from the list because he won't ever order. Look at today's prospect list with all the updated information and produce a predictive score with the chance you're going to have a pleasant outcome. PA is just numbers, but magical numbers that enable you to make decisions on one person at a time. So marketers need people who've done that before.
You compare predictive analytics to recycling. Can marketers lessen their trepidation by looking at it differently?
Most of the data people refer to as “Big Data” is data accrued as a side effect of business as usual. Data collected and not deleted. It's experience from which to learn. Every time you do a marketing campaign, you collect data and now you have history to learn from and use to predict behaviors the next time. The core science behind it is to define all the variables you'll need to score your customers on. The goal is to arrive at one number of [predictive value] for each person. With unstructured data--like a note scribbled by a customer service rep--you have to translate English into tabular data. The same goes for social data. One company found that if people change their phone service provider, their social contacts are seven times more likely than average to also switch.
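Siegel's “one number per person” idea can be sketched in a few lines. This is a hand-rolled illustration, not his method: the feature names and weights below are invented for the example (in practice the weights would be learned from past campaign history), and the logistic squash is just one common way to turn a weighted sum into a 0-to-1 chance of purchase.

```python
import math

# Hypothetical, hand-set weights; a real model would learn these from
# the campaign history described above.
WEIGHTS = {"orders_last_year": 0.8, "days_since_last_visit": -0.02, "opened_last_email": 1.1}
BIAS = -1.5

def propensity_score(customer):
    """Collapse one customer's tabular row into a single predictive number."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in customer.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash into a 0..1 probability

prospect = {"orders_last_year": 3, "days_since_last_visit": 10, "opened_last_email": 1}
cold_lead = {"orders_last_year": 0, "days_since_last_visit": 200, "opened_last_email": 0}
print(propensity_score(prospect))   # recent, engaged buyer scores high
print(propensity_score(cold_lead))  # dormant lead scores near zero
```

The point is the shape of the output, not the weights: every customer, however many variables describe them, ends up as one score you can rank and act on.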
You write about the decision tree as a trusty old prediction method. Is it outdated?
It's either the number one or number two modeling method at most companies, and it is still very valid. You can segment customers by building a tree upside down. You start with some simple condition like “bought three times in the last year,” then you add “lives here and has this salary,” and you keep adding—and, and, and, and. It's very transparent; anyone can look at it and understand it. There is, however, a lot of math behind the derivation of the rules you set in the predictive model. Research labs call this machine learning. It's about inferring an unknown. More advanced models can assign each individual a unique score. There are pros and cons to this level of complexity. There's less transparency, and when you increase the complexity of the model, the return on it varies. One of the most popular complex models is constructed by combining a bunch of simple models. Oftentimes it's a simple idea that has a very elaborate outcome.
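The “and, and, and” chain Siegel describes is literally what a decision tree looks like in code. Here's a toy, hand-built version, assuming made-up field names and thresholds; a learned tree would derive these splits mathematically from the data, but the readable shape is the same.

```python
def segment(customer):
    """Walk a hand-built decision tree: each nested branch adds another 'and' condition."""
    if customer["purchases_last_year"] >= 3:       # root split: bought three times?
        if customer["region"] == "US":             # and lives here...
            if customer["salary"] >= 50_000:       # and has this salary...
                return "high-value"
            return "loyal, modest budget"
        return "loyal, international"
    return "low priority"

print(segment({"purchases_last_year": 4, "region": "US", "salary": 80_000}))
print(segment({"purchases_last_year": 1, "region": "US", "salary": 80_000}))
```

This is the transparency he's pointing at: anyone can read the nested `if`s and see exactly why a customer landed in a segment, which is much harder with the complex ensemble models he mentions next.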
Tell us about Eric and Karrie and the Anxiety Index and what they can teach marketers about venturing into uncharted territory.
They were university researchers who were trying to measure behavioral trends by what people wrote in blogs. What was amazing, first of all, was that they were able to automatically label blog entries as to which ones connoted anxiety and which didn't. They came up with a method that detected about half of the anxiety-ridden blogs, but rarely labeled a non-anxious one as such. They weren't trying to predict the stock market when they started this, but that's what happened. They were looking for a way to validate their Anxiety Index, and so they went backwards and found that if anxiety rises today, the Dow Jones exhibits downward pressure.
Do direct marketers try to predict the wrong things?
In general they predict the wrong thing because it's easier and often pretty effective. You do a marketing campaign and look at the response rate. You look at purchasers and who they are and why they responded, and if that's what you care about, that's what you should predict. It's response modeling, and it's been used to good effect for years, but you have to ask, “Which of these people who bought would have bought anyway even if we didn't contact them?” The only way to answer that is to have a control group. Then you can predict whom you had an impact on. It's uplift modeling. The Obama people call it persuasion modeling.
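The control-group arithmetic behind uplift modeling fits in one line: the response rate among people you contacted minus the response rate among the held-out control group. The segment names and tallies below are invented to illustrate the difference between people the campaign actually persuaded and people who would have bought anyway.

```python
def uplift(treated_buys, treated_total, control_buys, control_total):
    """Uplift = response rate with contact minus response rate without (control group)."""
    return treated_buys / treated_total - control_buys / control_total

# Hypothetical campaign tallies: (buyers contacted, contacted, buyers in control, control)
segments = {
    "persuadables": (120, 1000, 60, 1000),   # contact roughly doubles the buy rate
    "sure_things":  (300, 1000, 295, 1000),  # high response, but they'd buy anyway
}
for name, counts in segments.items():
    print(name, round(uplift(*counts), 3))
```

Plain response modeling would chase the "sure_things" segment because its raw response rate is higher; uplift modeling reveals the campaign barely moved it, and that the budget belongs on the persuadables.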
Will Barack Obama go down in history as the best marketing president?
His campaign had a team of over 40 analytics experts. What they did was predict persuasion. If I sent you an email or knocked on your door and you told me you were voting for Romney, I'm not going to be able to persuade you otherwise. Or you may be undecided, but I might annoy you and trigger the opposite response to the one I want. They noted those people and suppressed them from their lists. Nobody knows if this was the deciding factor [in Obama winning reelection], but it was a significant factor.