The Importance of Proper Test Planning and Analysis
When planning your testing programs, remember to:
Ensure your sample sizes are large enough. Adequate sample sizes give you confidence in reading the test results. Testing with inadequate sample sizes is not only a waste of time and money; it can also lead you to an incorrect decision, resulting in lost opportunity or lost revenue. Small sample sizes yield high variability in test results, so you can easily be misled by them. If your testing budget is not large enough to test additional names per test panel, reduce the number of test panels … not the number of names tested.
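To make "large enough" concrete, the standard two-proportion sample-size formula estimates how many names per panel are needed to reliably detect a given lift. This is a minimal sketch, not from the article; the response rates, confidence level, and power below are hypothetical, chosen only for illustration.

```python
import math

def required_panel_size(p_control, p_test, z_conf=1.96, z_power=0.84):
    """Approximate names needed per panel to detect the difference
    between two response rates (two-proportion test; defaults give
    95% confidence and 80% power). Rates are proportions, e.g. 0.035
    for 3.5 percent."""
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    return math.ceil((z_conf + z_power) ** 2 * variance
                     / (p_control - p_test) ** 2)

# Hypothetical example: detecting a lift from 3.0% to 3.5% response
# takes roughly 20,000 names per panel -- far more than many
# marketers assume.
n = required_panel_size(0.030, 0.035)
```

Note how quickly the required size grows as the lift you want to detect shrinks: halving the difference roughly quadruples the panel size, which is why cutting names rather than panels is so dangerous.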
Reverse test a marketing decision. When rolling out a new format or creative concept, direct marketers often fail to retest the old promotional package against the new one. Reverse testing these changes gives a marketer valuable information about the performance of the new package. When a marketing campaign performs below forecast, a reverse test will identify whether the problem lies with the list or with the promotion, allowing you to take the proper corrective action before the next campaign. Without reverse tests, how could you possibly know which corrective action to take?
When analyzing your test results, remember to:
Assess the amount of error variance associated with your test panels. Too often, marketers merely eyeball the difference between test response rates without taking into account the error variance associated with them. All marketing tests carry some level of error variance, regardless of sample size. Placing confidence bands around those test estimates lets you determine the range in which the true response rate can lie in a roll-out situation. Once the confidence interval is calculated, you can run profit calculations using the upper and lower bounds to determine the best- and worst-case profit scenarios for the roll-out. Did you know that a test of 5,000 customers with a response rate of 3.5 percent can actually be as low as 2.99 percent or as high as 4.01 percent with 95 percent probability?
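The 95 percent interval quoted above can be reproduced with the usual normal approximation for a proportion. A minimal sketch (the 1.96 multiplier is the standard 95 percent z-value):

```python
import math

def response_rate_ci(rate, n, z=1.96):
    """Normal-approximation confidence interval for a test response
    rate. rate is a proportion (0.035 = 3.5 percent), n is the
    panel size."""
    se = math.sqrt(rate * (1 - rate) / n)
    return rate - z * se, rate + z * se

# The example from the text: 3.5% response on a 5,000-name panel.
low, high = response_rate_ci(0.035, 5000)
print(f"{low:.2%} to {high:.2%}")  # prints "2.99% to 4.01%"
```

Feeding `low` and `high` into your profit model gives the worst- and best-case roll-out scenarios the text describes.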
Perform statistical comparisons between test panels with a high level of confidence. Never perform a statistical comparison between two test panels at a confidence level below 90 percent. Doing so greatly increases your odds of making an incorrect decision and, again, losing opportunity or revenue for your company: you might conclude the new test panel has beaten your control when in fact it will perform worse in a roll-out. If there is little risk in erroneously concluding the test has beaten the control when in reality it has not, set the confidence level at industry-standard levels (90 percent or 95 percent). If there is high risk in making that same incorrect decision, set the confidence level higher (95 percent or 99 percent). Define risk in terms of cost: if the cost of the test panel is the same as that of the control, use industry-standard confidence levels.
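A pooled two-proportion z-test is one standard way to make this comparison, and it shows how the choice of confidence level changes the conclusion. A sketch with hypothetical counts (the panel sizes and response numbers below are invented for illustration, not taken from the article):

```python
import math
from statistics import NormalDist

def two_panel_z_test(resp_a, n_a, resp_b, n_b):
    """Pooled two-proportion z-test comparing two test panels.
    Returns the z statistic and the two-sided p-value."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: test panel 210/5,000 (4.2%) vs. control 175/5,000 (3.5%).
z, p = two_panel_z_test(210, 5000, 175, 5000)
significant_at_90 = p < 0.10  # beats the control at 90% confidence ...
significant_at_95 = p < 0.05  # ... but not at 95% -- risk tolerance decides
```

In this invented case the test "wins" at 90 percent confidence but not at 95 percent, which is exactly the situation where defining your risk in terms of cost tells you which threshold to apply.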
Direct mail, telemarketing, and Internet-based testing all require proper test design and analysis. Direct marketers invest considerable time and money in the development of product, format, offer, and creative concepts in the hope of finding a "winner." Don't overlook a winner, or be misled by a false one, in an effort to save costs by testing fewer customers. Do yourself and your company a favor by making ample test panel sizes part of your overall test budget.
Perry D. Drake is the vice president of Drake Business Services Inc. His e-mail address is Perry_Drake@dbsincorp.com.