In the search for insight, how relevant is the “size” of the data?
Big Data is like the Gold Rush of the wireless age. With mounds of market data and powerful equipment, we can find valuable nuggets of insight wherever we look. But more of the same doesn’t guarantee new insights. Big Data analytics are reactive: You wait and hope for enough of the right data to show you a new way to find customers, improve promotions, and create a better product.
In fact, Big Data has three drawbacks: important elements of the marketing mix can get lost in the noise; correlation doesn’t mean cause-and-effect; and, most important, back-end analytics will never identify things you’ve never changed or tried. If your price has never changed, then “price” will not stand out as an important variable; if you want to know the impact of a new piece of CRM data or an in-store display, history may be a poor guide. Ultimately, you need to test.
The Cinderella of statistics
Small Data testing answers the challenges of Big Data. In-market testing is the only way to prove what works on the front lines of the marketplace—with the minimal amount of data needed to see statistically significant changes. The Cinderella of statistics—testing—is often overlooked and misunderstood. The familiar change-one-thing-at-a-time scientific method overlooks an entire field of research called “design of experiments”: a collection of techniques for testing many variables at once, in an organized way that lets you separate the impact of each (the basis of multivariate testing). Over the past 14 years these complex statistics have been further refined and streamlined for the marketplace, requiring even less data than A/B tests. From the dusty tomes of academia, the forgotten stepchild of Big Data has quietly been transformed into a lithe, powerful, and beautifully efficient expert to help pinpoint new opportunities and accelerate learning.
In practice, multivariate testing encompasses a full toolbox of “mosaic” test designs and strategies that offer the freedom to test more variables in depth, with greater speed and accuracy, using a surprisingly small sample size. It’s a scientific way to quiet market forces long enough to clearly hear customers as they speak with their wallets. With minimal—but the right—data you can see many small market changes and complex relationships that years of Big Data number crunching will never show.
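To make the “organized way” concrete, here is a minimal sketch of a classic design-of-experiments idea: a two-level fractional factorial. Seven marketing elements are tested in only eight runs by generating four of the factor columns from interactions of three base factors; because the columns are orthogonal, each factor’s effect can still be separated. The factors, responses, and effect size below are invented for illustration—they are not from the article’s tests.

```python
from itertools import product

# Eight runs from a full factorial on three base factors (A, B, C);
# four more columns are generated from interactions (D=AB, E=AC, F=BC, G=ABC).
design = [(a, b, c, a*b, a*c, b*c, a*b*c)
          for a, b, c in product([-1, 1], repeat=3)]
names = ["A", "B", "C", "D", "E", "F", "G"]

# Hypothetical responses: suppose only factor B truly lifts conversion.
def response(run):
    return 100 + 8 * run[1]  # +8 conversions when B is at its high level

# Each factor's effect = mean response at its high level minus low level.
effects = {}
for i, name in enumerate(names):
    hi = [response(r) for r in design if r[i] == 1]
    lo = [response(r) for r in design if r[i] == -1]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

print(effects)  # B stands out at 16.0; the orthogonal columns all read 0.0
```

Because every column is balanced against every other, the one real driver (B) shows a clean effect while the six inert factors estimate to zero—seven answers from eight runs, instead of seven separate A/B tests.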
Consider this example: A marketing team came up with 26 elements for two separate multivariate tests. The tests ran online for three weeks, totaling about 12,000 orders. Results identified 10 important elements (plus interactions) that achieved a 16% increase in conversion and 37% jump in revenue upon rollout. A/B tests would have required 35 weeks for equal confidence and Big Data analytics would have revealed only three important variables, since no other variables had previously been changed.
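A stylized back-of-envelope calculation (not the article’s own statistics) shows why multivariate tests need so much less data: in a balanced two-level design every factor’s effect is estimated from the entire sample, while sequential A/B tests must divide that sample across one question at a time. The numbers 26 and 12,000 come from the example above; the noise model is an assumption for illustration.

```python
import math

def se_effect(total_n, sigma=1.0):
    # Standard error of a two-level effect estimate when total_n
    # observations are split evenly between the two levels.
    per_level = total_n / 2
    return sigma * math.sqrt(1 / per_level + 1 / per_level)

k = 26      # elements under test (from the example)
N = 12000   # total orders in the multivariate test

se_mvt = se_effect(N)      # every factor uses all N orders
se_ab = se_effect(N / k)   # the same N orders shared across k A/B tests

print(round(se_ab / se_mvt, 1))  # ~5.1: each A/B estimate is ~5x noisier
```

The ratio is simply the square root of the number of factors, which is why matching a multivariate test’s confidence with sequential A/B tests multiplies the required sample—and the calendar time—many times over.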
But Small Data testing takes work. You need to tailor your marketing programs to create the changes you want to measure. You may need to start the test design, creative work, and execution plan weeks earlier than normal. You may need to pay for new packaging, in-store displays, or signage. Yet with the right techniques, tests can be completed in weeks instead of months or years. With cause-and-effect results, you can confidently roll out and quickly quantify your ROI. So Big Data offers quick and easy (and passive) answers, while Small Data testing requires (proactive) upfront effort to gain valuable truths.
Mine for clues, test for proof
Testing is a proactive way to answer specific, well-defined questions. Big Data analytics is a way to sift through everything else in search of clues and correlations. You can’t control who visits your store, so you create statistical models to try to learn more about them. You can control what, when, and to whom you mail, so CRM tests offer fast answers. You can control your store layout, so retail tests can be highly profitable.
Big Data gives you answers to what variables have potential, but testing gives you the truth: It shows what, on its own or in combination with other elements of the marketing mix, has a direct impact on response, sales, and customer retention. As one visionary direct marketer has said, “Perhaps one time in fifty a guess may be right. But fifty times in fifty an actual test tells you what to do and avoid.” That was Claude Hopkins, back in 1927. How much more could he—and we—achieve today?
Gordon H. Bell, LucidView
President of LucidView and managing partner of the Artestry collective, Gordon H. Bell has been publishing his views on the science and practice of in-market testing—especially multivariable and mosaic testing—for 15 years. The Knoxville resident has lectured at Yale University and Wharton, coauthored statistical papers in the International Journal of Research in Marketing and Interfaces, and contributed case studies to the eighth edition of Successful Direct Marketing Methods. When not engrossed in statistics, Bell enjoys a plethora of activities including hiking, skiing, international travel, sneaking books from his poet-professor wife, and keeping abreast of the world of fairies and mermaids with the youngest of his four children.