Maintaining a Scientific Approach to Direct Marketing
MJ Crabbe-Barberis, principal CRM business consultant, Infor
As any introductory college-level direct marketing course teaches, applying a scientific approach to campaigns is the “right” thing for direct marketers to do.
Unfortunately, as the marketing world has grown more complex, with ever-expanding channels, data sources, and technological capabilities available to marketers, the basics seem to be getting left behind. It's as if direct marketers are counting on sophisticated system capabilities to work their magic. Or perhaps they're simply overwhelmed and intimidated by the technology and the masses of data. We've all heard the adage, “garbage in, garbage out.”
The problem is that sophisticated systems and models help camouflage poor marketing basics: they can improve campaign results even when sound, scientific direct marketing principles aren't followed. If that's the case, just think of the benefits companies could attain by combining a solid scientific approach and good direct marketing practices (which may not be scientific, per se) with sophisticated technology.
I'll summarize a few key direct marketing oversights I've seen across a variety of industries and companies around the globe.
Test and learn: trying to test more than one variable at a time
A/B testing is the most common application of scientific testing principles in direct marketing. Done correctly, it lets the marketer glean valuable behavioral data that can be used to improve the performance of future campaigns. I've observed the following common errors:
- Not starting with a premise to test. Give strategic thought to what you're trying to learn, and start with a Champion creative (which may or may not be a strong performer, but it's your baseline).
- Changing more than one element in the test. Select only one variable; this is critical. If you change multiple variables such as copy, layout, target audience, and delivery timing all at once in the Challenger package, it'll be impossible to distinguish which variable had an impact (positive or negative).
- Not having a testing roadmap. Create a strategic vision for testing whereby you systematically change one element at a time and build on what you learn in each subsequent round. If the Challenger beats the Champion, make the Challenger the new Champion and carry on, bit by bit. You may, of course, modify what you test in each round based on results and a new premise you want to explore. In doing so you can continually improve your campaign results.
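Once a Champion/Challenger round has run, the comparison itself can be made scientific with a standard two-proportion z-test. A minimal sketch in Python; the function name and the response counts are hypothetical illustrations, not figures from any real campaign:

```python
from math import sqrt, erf

def champion_challenger_test(resp_a, n_a, resp_b, n_b):
    """Compare response rates of two mailings (Champion A vs. Challenger B)
    with a two-proportion z-test; returns (relative lift, two-sided p-value)."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    lift = (p_b - p_a) / p_a
    return lift, p_value

# Hypothetical numbers: Champion mails 50,000 pieces and gets 1,000
# responses; Challenger mails 50,000 and gets 1,120.
lift, p = champion_challenger_test(1000, 50_000, 1120, 50_000)
print(f"lift = {lift:.1%}, p = {p:.4f}")
```

Only crown a new Champion when the p-value falls below whatever significance threshold your company has agreed on; an apparent lift with a high p-value may just be noise.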
Calculating lift: control group errors
Direct marketers are often pressured to forgo solid control methodologies because controls reduce the population being targeted. This pressure to reach as many eligible consumers or businesses as possible trades future learning for immediate response volume; it's the equivalent of being “penny-wise and pound-foolish.” A scientific approach dictates that one is always learning and looking to improve future performance. Here are some ways to accomplish that via control methodologies:
- Hold out statistically significant control populations. To do this you have to make a judgment call about your expected response rate: you want enough response data in both the control group and the target group to calculate a statistically significant response lift.
- Run the calculations. Your statistics gurus can calculate what percentage of the population needs to be held out to achieve a given confidence level in the comparison. That level is up to you: what will your company accept? Confidence of 85% to 95% is widely accepted. The higher the confidence you demand, the more control data you'll need; there's a balance between complete accuracy and over-sampling.
- Manage control groups across campaigns. Ah, the silo. It doesn't take much thought to realize that, with all of the marketing channels out there today, people will be targeted for the same product or offering via more than one channel. And if you're managing marketing by channel, someone could well be in the control group for a campaign on one channel while sitting in the target population for a campaign for the same product on another. This should be anticipated, and it can be managed on the back end if the control data is shared. Consideration can also be given to the next observation.
- Consider holding out a universal control group. Arguments over attribution of results are common in most companies. What really motivated the response? A newspaper ad, a billboard, an email, a letter? The last communication on the topic? All of the above? Some companies address this dilemma by holding out a universal control: a population that won't be targeted at all. The individual campaign controls should still be used if this strategy is adopted. The downside is that you'll again be reducing the marketable population across the company, which tends to be undesirable. A business case may have to be made that the long-term value of the universal control (systematically improving campaign results over time) outweighs the cost of not marketing to the entire population. A universal control is also a great way to isolate the overall effect of advertising and non-direct media.
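The holdout-size calculation mentioned above can be sketched with the standard two-proportion sample-size formula. This is a generic power calculation, not the author's specific method, and the baseline response rate and target lift below are hypothetical:

```python
from math import ceil

# Standard normal quantiles, hard-coded to avoid a SciPy dependency.
Z_ALPHA = 1.9600   # two-sided test at alpha = 0.05 (95% confidence)
Z_POWER = 0.8416   # statistical power of 80%

def holdout_size(base_rate, min_lift, z_alpha=Z_ALPHA, z_power=Z_POWER):
    """Minimum people per group (control and target) needed to reliably
    detect a relative lift of `min_lift` over a `base_rate` response rate."""
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)
    n = ((z_alpha + z_power) ** 2
         * (p1 * (1 - p1) + p2 * (1 - p2))
         / (p2 - p1) ** 2)
    return ceil(n)

# Hypothetical: 2% baseline response rate, and we want to detect
# a 10% relative lift (2.0% -> 2.2%).
print(holdout_size(0.02, 0.10))
```

Note how quickly the required holdout grows as the detectable lift shrinks; that growth is exactly the accuracy-versus-over-sampling trade-off the marketer has to make.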
While these are just two common oversight categories, others exist as well. As you move into the New Year, make it your resolution to leverage the scientific approach.