The holy grail for nearly every company in every category is customer loyalty. Very often, I find myself describing our process when a potential client stops me with the words: “We already have loyalty measures.” Invariably, the client is an intelligent and seasoned top-ranking executive of a major corporation. And, invariably, the intelligent and seasoned executive is dead wrong.
There is a simple reason why these top executives – most top executives – do not know what they are talking about when it comes to brand and customer loyalty measures: Their research departments are misleading them and have been for years.
It is not that corporate research directors are consciously lying. The real problem is that the “brand loyalty” data cited by research directors do not have much to do with actual customer loyalty. That is because there is no way to measure customer loyalty using the traditional methods favored by the research departments of just about every major corporation. These methods cannot measure customer loyalty because traditional research uses the direct question-and-answer approach: direct answers to direct questions. And there is no way to determine through direct questioning, with a reasonable degree of statistical accuracy, whether customers will remain loyal to a brand.
Take a look at the direct questions on traditional surveys that are supposed to capture brand loyalty information. “Do you plan to buy this product again?” “Would you recommend this product to a friend or colleague?”
It is not a methodology issue per se. All legitimate researchers ask their questions properly, collect data accurately and crunch the numbers according to Hoyle. The problem is the questions themselves: they do not elicit truthful answers. Sometimes people shape their answers to give you what they think you want. Or to make themselves look good. Or in the hope that their answers will influence their own future behavior.
Regardless of why, the fact remains that answers to direct questions are not predictive of future behavior. They do not correlate to any notable extent with real marketplace activity. In other words, traditional research methods yield statistically reliable and valid answers – to meaningless questions.
There are two reasons why top executives will not be hearing this from their research directors. First, most research directors do not know any other way to get loyalty data, nor do they pay attention to the lack of correlation between their data and the actual purchase behavior of their customers. Their consciences are clear when they say they have loyalty measures, because it is what they believe. As the psychologist Abraham Maslow said, when your only tool is a hammer, the whole world starts looking like nails.
The second reason is slightly more sinister. Imagine yourself in the position of a research director who understands that the loyalty data you have been serving up for the past two decades are incapable of doing the one thing you would expect a true loyalty measure to do: predict future customer behavior. Would you want to admit it? Or would you tend to hunker down and insist to the higher-ups that everything is just fine on the loyalty front? The problem is compounded because most executives look only at bottom-line, single-item research results (such as awareness numbers), paying no attention to the methods that produced them. Even the most hands-on executives generally keep their hands off the research department, the theory being that such arcana are best left to the experts.
However, the experts in your research and planning departments are still relying on methods that were developed during the Eisenhower administration. The ability of these methods to track the direction and velocity of fast-changing customer values? Nil. Their ability to predict whether your customers will remain loyal or be peeled off by the competition? Don’t ask. (Even more pathetic are the Internet companies – many of them “dot-gone” by now – that eschewed even traditional research, opting instead to adopt the theory that counting clicks was a good way to gauge loyalty and profitability. Or those that decided, without any empirical evidence, that high measures of “awareness” – the kind you get, for instance, when you spend millions to run a spot on the Super Bowl – would lead to online loyalty.)
The solution? You can start by calling in your research people and grilling them about how well their current brand loyalty numbers are matching up with sales as well as with your bottom-line profits and stock price. Your researchers will protest. They will howl. But, ultimately, they will roll over and admit that they do not have the goods. Then tell them to get busy figuring out where to go for customer loyalty studies that are valid, reliable and correlated with your customers’ behavior in the marketplace.
That holy grail you thought you had? With all due respect, it is nothing more than a leaky plastic mug.