# How Big Should My Test Be

Recently, a veteran list broker recommended test quantities of as few as 5,000 to a direct marketer with prospecting response rates as low as 0.25 percent. Unfortunately, the resulting 13 or so responders would have been far from adequate to read the results of the tests. About the same time, a highly respected direct marketing consultant commented that test list quantities should be large enough to generate at least 50 responders. Unfortunately, that rule of thumb is simplistic.

A review of all the concepts behind good direct marketing testing is beyond the scope of this article -- things such as confidence levels and intervals, one- versus two-tail tests, stratified sampling, power testing, finite population correction factors, alpha versus beta misreads, and the interpretation of dollar versus response rate performance. Nevertheless, we will focus on a single formula to provide some groundwork for answering a question that I have been asked countless times as a direct marketing consultant: "How big should my test be?"

The short answer is, "It depends." (Bear with me, however, because things will become clearer.) For a given expected response rate, no one test panel quantity will be optimal for every direct marketer. The appropriate quantity will depend on factors such as the amount of money available for testing and the level of risk the direct marketer is willing to assume that the rollout response rate will differ significantly from the test rate.

However, I will outline how you can intuitively arrive at a well-considered conclusion. This requires a two-part statistical formula that every direct marketer should commit to memory:

*Part 1: (Expected Response Rate X (1 - Expected Response Rate) X Z²) ÷ Precision²*

*Part 2: Answer to Part 1 ÷ (1 + (Answer to Part 1 ÷ Rollout Universe Quantity))*

(Part 2 is what's known as a finite population correction factor.)

First, a few sentences on "Precision" and "Z":

Precision describes the degree of "plus/minus" uncertainty around a test panel response rate. After all, we can never know for sure by examining a test panel response rate what the true rollout rate will be.

Many direct marketers consider precision of 10 percent to be acceptable; that is, the true rollout response rate will be within 10 percent of the test panel rate a certain percentage of the time. A 1.0 percent test panel rate, for example, translates into a rollout rate of 0.9 percent to 1.1 percent.

Understanding "Z" would require a statistics lesson. All we need to know for our purposes is that it corresponds to the degree of confidence we have in the accuracy of our test panel response rate. For example, a given test panel quantity will result in confidence that, say, 80 percent of the time a test panel response rate of 1.0 percent will translate to a rollout rate of 0.9 percent to 1.1 percent.

Direct marketers would love to be very confident with very narrow precision. But this generally requires a staggeringly high investment in very large test panel quantities. Therefore, they face the difficult decision of just how much of an investment to make.

Though no one answer is correct for every direct marketer, guidelines can be posited. We'll reference the table below as we explore this issue:

Many direct marketers are unwilling to accept confidence of less than 80 percent. So let's go with this for now, combine it with a precision of +/- 10 percent, and see what that translates to in terms of test panel quantity.

The one thing we are missing is an expected response rate. Because so much testing is done on rental lists, let's focus on prospecting, where response rates are much lower than for customers. We'll assume a response rate of 0.8 percent, take a list with a universe of 100,000, and use our two-part formula to calculate the corresponding test panel quantity:

**Part 1:** The numerator is 0.8 percent X (1 - 0.8 percent) X (1.282 X 1.282), which equals 1.3043 percent. (1.282 is the Z value corresponding to 80 percent confidence.) The denominator is (10 percent of 0.8 percent) X (10 percent of 0.8 percent), which equals 0.000064 percent. Put them together -- that is, 1.3043 percent ÷ 0.000064 percent -- and the result is 20,380.

*Part 2: 20,380 ÷ (1 + (20,380 ÷ 100,000)) equals 16,930.*

Therefore, with a test panel response rate of 0.8 percent and a universe of 100,000, a test panel size of 16,930 will result in our being 80 percent confident that the rollout response rate will be between 0.72 percent and 0.88 percent. This means that 10 percent of the time our rollout rate will be less than 0.72 percent, or 10 percent less than expected. Conversely, 10 percent of the time it will be greater than 0.88 percent, or 10 percent more than expected.
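The arithmetic above can be checked in a few lines, writing the percentages as decimals (0.8 percent becomes 0.008):

```python
# Part 1 numerator: 0.8% x (1 - 0.8%) x Z-squared, with Z = 1.282 for 80% confidence
numerator = 0.008 * (1 - 0.008) * 1.282 ** 2   # about 0.013043, i.e. 1.3043 percent

# Part 1 denominator: (10 percent of 0.8 percent), squared
denominator = (0.10 * 0.008) ** 2              # 0.00000064, i.e. 0.000064 percent

part1 = numerator / denominator                # about 20,380

# Part 2: finite population correction for a 100,000 universe
part2 = part1 / (1 + part1 / 100_000)          # about 16,930

print(round(part1), round(part2))
```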

Consider the problems that this uncertainty can create in circulation planning. All direct marketers have experienced what happens when a rollout response rate is significantly less than expected: a failed rollout.

Many do not realize it, but all have also experienced what happens when a rollout response rate is (or, more accurately, would have been) significantly greater than expected: perfectly good rollouts that were never exploited because of poor test results. This happens when, purely by chance, the test panel rate comes in so far below the true rate that it dips below what's considered acceptable.

This hidden, second error of testing is particularly treacherous because it is magnified by the opportunity cost of not promoting a cost-effective rollout universe many times in the future. Considering how tough it is to find rental lists that work in today's competitive direct marketing environment, our industry is missing out on expansion opportunities.

The problem is that a test panel quantity of 16,930 is a larger investment than most direct marketers are willing to make. As a point of reference, the 135 expected responders (16,930 X 0.8 percent) are far more than the 50-responder rule of thumb referenced earlier.

To reduce the test panel quantity, we have to widen our precision, decrease our level of confidence, or both. So, let's run our formula under three more scenarios. For each, you can decide whether you're comfortable with the results:

1) With a test panel size of 11,826 and a precision of +/- 10 percent, we can be 70 percent confident that our rollout response rate will be from 0.72 percent to 0.88 percent. So, 15 percent of the time the rollout rate will be less than 0.72 percent, and 15 percent of the time it will be greater than 0.88 percent. And the resulting 95 responders are almost twice the 50-responder rule.

2) With a test panel size of 8,305 and a precision of +/- 15 percent, we can be 80 percent confident that our rollout response rate will be from 0.68 percent to 0.92 percent. So, 10 percent of the time the rollout rate will be less than 0.68 percent, and 10 percent of the time it will be greater than 0.92 percent. And the 66 responders are more than the 50-responder rule.

3) With a test panel size of 5,625 and a precision of +/- 15 percent, we can be 70 percent confident that our rollout response rate will be from 0.68 percent to 0.92 percent. So, 15 percent of the time the rollout rate will be less than 0.68 percent, and 15 percent of the time it will be greater than 0.92 percent. And at 45 responders, we're near the 50 rule.

How big should your test be? If you enter the formula that I have given you into a spreadsheet and run some test scenarios with response rates typical for your business, you'll have a basis for reaching your own conclusion.