
Dispelling Some Market Testing Myths

Every marketing program has unrealized potential. With so many unknowns in the marketplace, opportunities are waiting to be uncovered. The problem is finding them.

Sometimes you need big strategic or process changes, but often the first step is to leverage your market knowledge and optimize your current programs. How? By testing, but testing faster, more efficiently and with more powerful techniques.

A good place to start is by dispelling a few myths.

Myth 1: Testing is not rocket science. Actually, it is rocket science. Though basic split-run tests were first developed in the early 1900s to increase crop yields in agriculture, the advanced techniques developed since the 1940s were initially used to improve conventional and nuclear weapons.

As you can imagine, scientists had to learn a lot from a few tests with little room for error. After being streamlined and refined for market testing, these advanced techniques have been used successfully by a few industry leaders, but remain largely unknown by even the most experienced marketers.

Myth 2: Statistical calculations are unnecessary for most tests. In every test you must know your uncertainty. Statistics lets you quantify uncertainty. Therefore, you cannot analyze test results without applying basic statistical techniques.

One of the biggest mistakes in testing is seeing a change in response and assuming it’s because of a test variable when it’s just random variation. Here’s a simple check: Give your control and test mailings five key codes each; then graph all 10 response rates to see how much they vary. Do you see a big jump between each group of five, or do they all fall within the same range?
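To make this check concrete, the sketch below compares the spread within each group of key codes to the gap between the control and test groups. All of the key codes, response counts and mail quantities are hypothetical placeholders; substitute your own tallies.

```python
# Hypothetical key-code tallies: (responses, pieces mailed) per key code.
# Five key codes on the control mailing, five on the test mailing.
control = {"A1": (212, 10000), "A2": (198, 10000), "A3": (205, 10000),
           "A4": (191, 10000), "A5": (220, 10000)}
test    = {"B1": (227, 10000), "B2": (215, 10000), "B3": (234, 10000),
           "B4": (209, 10000), "B5": (222, 10000)}

def rates(cells):
    """Response rate, in percent, for each key code."""
    return [100.0 * responses / mailed for responses, mailed in cells.values()]

ctrl, tst = rates(control), rates(test)

print("Control key codes:", ", ".join(f"{r:.2f}%" for r in ctrl))
print("Test key codes:   ", ", ".join(f"{r:.2f}%" for r in tst))
print(f"Control range: {min(ctrl):.2f}% to {max(ctrl):.2f}%")
print(f"Test range:    {min(tst):.2f}% to {max(tst):.2f}%")
```

If the two ranges overlap as much as they differ, treat the apparent lift as random variation until a properly sized test says otherwise.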

To ensure reliable results, you need to calculate sample size before every test. The general formula depends on three variables: your expected response rate, how small a change you want to detect and the confidence level you require. Overall, the lower your response rate or the smaller the change you want to detect, the larger the sample you need. For example, with a 2 percent response rate, if you want to be 80 percent confident that you will be able to see a 20 percent change, then you need to mail nearly 40,000 test packages. If you want to see a smaller change of 10 percent, then your sample size jumps to more than 150,000. There are no shortcuts with sample size. Mail too few and your results are just a guess.
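The article does not spell out the formula, but its figures are roughly in line with the standard two-proportion sample size calculation. The sketch below uses that formula with common default constants (a 95 percent confidence level and 80 percent power); both the constants and the function name are assumptions for illustration, not the author's exact method.

```python
from math import ceil

def mail_quantity_per_cell(base_rate, relative_lift, z_conf=1.96, z_power=0.84):
    """Approximate pieces to mail per cell (control or test) to detect a
    given relative lift in response rate.

    base_rate     -- expected control response rate, e.g. 0.02 for 2 percent
    relative_lift -- smallest relative change worth detecting, e.g. 0.20
    z_conf        -- z-value for the confidence level (1.96 ~ 95%, two-sided)
    z_power       -- z-value for statistical power (0.84 ~ 80% power)
    """
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_conf + z_power) ** 2 * variance / (p2 - p1) ** 2)

for lift in (0.20, 0.10):
    n = mail_quantity_per_cell(0.02, lift)
    print(f"2% response, {lift:.0%} lift: ~{n:,} per cell, ~{2 * n:,} in total")
```

Under these assumptions the combined control-plus-test quantities land near the article's figures, and they shift if you choose a different confidence level or power.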

Myth 3: Small tests can give good directional data. This follows from the sample size issue: if the difference is not statistically significant, then you are just guessing. “Directional” data means your guess may be a bit better than 50-50, but you can still guess exactly wrong. Most often, marketers use directional data to claim a successful test when there’s nothing there: with too small a sample size, too small a difference or both, they cannot draw any valid conclusions. Directional data tells you nothing at best and leads you down the wrong path at worst.

Myth 4: Changing one variable at a time is the only way to test. The scientific method is wrong. OK, it’s not wrong, but often misinterpreted. The key to the scientific method is that you must be able to isolate the effect of the variables you are studying to prove a cause-and-effect relationship. If a lot of things change together or inconsistently, then the effect of each variable cannot be isolated.

However, and this is a big however, with the right techniques you can change many variables at once and still isolate the effect of each.

These advanced testing techniques offer impressive benefits. For example, you can simultaneously test up to three dozen elements with the same sample size as an A/B split. And varied designs let you create a test to meet your objectives, whether testing 10 or 20 variables at once.
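The article does not name the underlying designs, but testing dozens of elements in one mailing without enlarging the sample is what orthogonal arrays (fractional factorial or Plackett-Burman designs) are built for: every element appears at each of its two levels in half the mail cells, so each main effect can be read from the full sample. The sketch below walks through the standard eight-run Plackett-Burman array for seven two-level elements; the element names and response rates are purely illustrative.

```python
# Standard 8-run Plackett-Burman array for 7 two-level factors (+1 / -1 coding),
# built by cycling the generator row (+ + + - + - -) and adding a row of all -1.
DESIGN = [
    [+1, +1, +1, -1, +1, -1, -1],
    [-1, +1, +1, +1, -1, +1, -1],
    [-1, -1, +1, +1, +1, -1, +1],
    [+1, -1, -1, +1, +1, +1, -1],
    [-1, +1, -1, -1, +1, +1, +1],
    [+1, -1, +1, -1, -1, +1, +1],
    [+1, +1, -1, +1, -1, -1, +1],
    [-1, -1, -1, -1, -1, -1, -1],
]

# Hypothetical package elements (+1 = new version, -1 = control version).
FACTORS = ["envelope size", "teaser", "headline", "letter length",
           "price display", "insert", "reply device"]

# Hypothetical response rates (percent) observed for the 8 mail cells.
responses = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5, 2.0, 1.8]

# Main effect of each element = average response where it ran at +1
# minus average response where it ran at -1. Because the columns are
# orthogonal, each effect is isolated even though all 7 elements vary at once.
for j, name in enumerate(FACTORS):
    plus  = [r for row, r in zip(DESIGN, responses) if row[j] == +1]
    minus = [r for row, r in zip(DESIGN, responses) if row[j] == -1]
    effect = sum(plus) / len(plus) - sum(minus) / len(minus)
    print(f"{name:14s} main effect: {effect:+.2f} points of response")
```

The eight cells together need no more names than a two-cell split of the same total size, which is how the sample size stays flat as the number of elements grows; reading the effects this way assumes interactions are negligible, the usual caveat with screening designs.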

Recent tests have turned up interactions such as these: a dot-whack on a catalog cover had a much greater effect with a numerical message than with a text-only message, and in another case sales dropped when a “free trial” was promoted alone but rose when the free trial offer included the product price. The ability to uncover such interactions grows stronger as you add more variables within one test.

Myth 5: Test the big things. There is some truth in this myth. Everyone tests price points, new channels and programs, offers and lists because they are so important. Yet you also should cast a wide net for new ideas. You never will know what else is important unless you test other elements. In addition, you may find five or six changes with small effects that together produce a big jump in sales.

For example, instead of just testing one new envelope against your control, test every element of the mail package: envelope size, color, teaser, indicia and logo; letter layout, headlines, copy, graphics, price points and presentation, and offer; and inserts, interactive devices, call-to-action and reply tactics. Then you can find the optimal combination. Eliminate what hurts sales, add what helps and drop unnecessary costs.

In casting a wide net, you can find growth opportunities where your competitors never look. A separate A/B split may not be worth the cost to test one small variable, but cutting-edge designs let you test many of these small elements at little or no extra cost since sample size and run length are the same.

Myth 6: Testing seldom leads to more than incremental growth. Those who test incremental changes will achieve only incremental growth. Those who test boldly and broadly can achieve dramatic improvement rapidly. Since the more you test the more you learn, test many elements first. Then zero in on the few most important ones to test further. Also, test bold changes first. If you test a big change and it has no effect, then smaller tweaks likely will have no effect, either. If a large change is significant, then you can adjust the levels to pinpoint the best setting.

Testing won’t solve all your challenges, but testing – with efficient and powerful techniques – can squeeze greater profit from every marketing dollar. After dispelling a few myths, you can embrace the speed, flexibility and insights that present-day testing techniques offer.
