Test your lists for better campaign targets
The right testing programs can make or break your marketing campaign efforts, particularly in a tight economy. Four list experts tackle which tests work and how to make them work for you
Whenever I look at list test results, my first reaction is usually that the winners probably won't do as well when remailed in larger volume in the future. I used to wonder if the list owner knew that this small quantity order from a new mailer was a test, so he stacked the deck by supplying his best names.
Today, I often work with names taken from a large and public prospecting database, where the mailer controls what names are selected, yet the same “rollout falloff” still occurs. The true cause is a fundamental statistical phenomenon.
Every list test is a sample result from a universe of sample results that could be obtained from the same list. The average of all of these results will be the real, long-term response rate for the list. Any one result, however, could deviate substantially from this long-term response rate.
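A quick simulation makes this concrete (the numbers below are my own illustrative assumptions, not from the article): even if twenty lists all share the same true 2% response rate, a 5,000-piece test of each produces a spread of results, and the apparent "winner" owes its edge to sampling luck.

```python
import random

random.seed(7)

TRUE_RATE = 0.02    # assumed long-term response rate, identical for every list
TEST_SIZE = 5_000   # pieces mailed per list test
NUM_LISTS = 20      # lists tested side by side

def observed_rate(rate, n):
    """Simulate one test mailing and return the response rate actually seen."""
    responders = sum(1 for _ in range(n) if random.random() < rate)
    return responders / n

results = [observed_rate(TRUE_RATE, TEST_SIZE) for _ in range(NUM_LISTS)]
best = max(results)

# The top test result beats the true 2% purely by chance, which is why
# it tends to "fall off" when remailed in larger volume.
print(f"best test result: {best:.2%}  true long-term rate: {TRUE_RATE:.2%}")
```

Rerunning with a different seed changes which list "wins," but the winner's test result almost always overstates its long-term rate.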
I am careful about A/B split tests — usually copy, design or offer differences — where the major objective is to find out which of two versions performs best and not so much to learn the response rate itself.
The general belief in the direct marketing world is that an A/B test version that does 10% to 15% better is a clear winner. However, the reality is that a 10% to 15% win is only somewhat more likely to be the true, long-term, best version — that is, there is perhaps a 65% chance that it is the best version and a 35% chance that, in reality, it is the loser.
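To see why a 10% to 15% lift is such weak evidence, consider a rough simulation (the cell sizes and rates are my own illustrative assumptions): two versions with identical true response rates still show a 10%-plus observed lift in a large share of modest-sized tests.

```python
import random

rng = random.Random(11)

BASE_RATE = 0.02   # both versions share this true rate: there is no real winner
CELL_SIZE = 5_000  # pieces mailed per version
TRIALS = 2_000     # simulated A/B tests

def responses(rate, n):
    """Count responders in one simulated mailing cell."""
    return sum(1 for _ in range(n) if rng.random() < rate)

false_wins = 0
for _ in range(TRIALS):
    a = responses(BASE_RATE, CELL_SIZE)
    b = responses(BASE_RATE, CELL_SIZE)
    hi, lo = max(a, b), min(a, b)
    if lo > 0 and hi / lo >= 1.10:  # an apparent 10%+ "winner"
        false_wins += 1

share = false_wins / TRIALS
print(f"{share:.0%} of tests produced a 10%+ lift with no true difference")
```

At these volumes, chance alone manufactures "winners" often enough that a modest lift deserves skepticism.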
How do you protect against the A/B test problem?
First, don't test minor differences. These almost never produce measurable long-term response differences.
Also, look for differences of 15% to 20%. Do a confirming retest if the difference comes up smaller.
Mail each version in large test volumes — 25,000 to 50,000 pieces, if possible. Large volumes are often practical if the mailing is already a proven success and you are simply testing refinements.
A test that's too small or otherwise flawed can frequently cause you to choose the loser as the winner. That's worse than skipping the test and simply making your choices by intuition.
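The 25,000-to-50,000-piece advice squares with a standard normal-approximation sample-size formula (the formula and the example rates are my additions, not the article's). Assuming a 2.0% baseline response rate and a 15% lift to 2.3%, with 5% two-sided significance and 80% power:

```python
import math

def cells_needed(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Pieces per version needed to tell p1 from p2 (normal approximation;
    defaults give 5% two-sided significance and 80% power)."""
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_var / (p2 - p1) ** 2)

n = cells_needed(0.020, 0.023)  # 2.0% baseline vs. a 15% lift
print(f"pieces needed per version: {n:,}")
```

That lands in the tens of thousands of pieces per version, which is why small splits so often crown the wrong winner.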
Don't cut corners on A/B testing just to save yourself time and money
CEO, Redi-Mail Direct Marketing
In a tight economy, it is more prudent than ever to make sure marketing efforts are highly targeted — not only for the bottom line, but to increase response rates, ultimately increasing top-line sales.
One of a company's most valuable assets is its customer and prospect database. It has been our experience that the traditional 40/40/20 rule applies not only to postal mailing, but to e-mail marketing as well. With the success of a campaign riding 40% on the list, 40% on the offer, and 20% on the creative, there is an excellent case for making sure all your marketing lists are as clean and accurate as possible.
E-mail marketing campaign list testing increases your customer knowledge, fine tunes lists and creates highly targeted segments for tailored offers. There are several key, common-sense techniques for successful e-mail list testing.
For example, pay attention to essential housekeeping in your house files. Remember the basics: data cleansing, standardization, merge-purge, de-duplication, and so on.
Once you've established regularity with an e-mail list, you will probably find a group of recipients who have opted-in but have not opened or clicked through in several e-mail blasts. Segmenting and testing this group of unresponsive recipients can yield important information on why they are not responding.
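One way to pull that unresponsive segment out of a house file is a simple filter like the sketch below (the field names are hypothetical, standing in for whatever your e-mail platform exports):

```python
def unresponsive_segment(recipients, min_blasts=3):
    """Opted-in recipients who received at least `min_blasts` e-mails
    but never opened or clicked. Field names are illustrative."""
    return [
        r for r in recipients
        if r["opted_in"]
        and r["blasts_received"] >= min_blasts
        and r["opens"] == 0
        and r["clicks"] == 0
    ]

house_file = [
    {"email": "a@example.com", "opted_in": True, "opens": 0, "clicks": 0, "blasts_received": 5},
    {"email": "b@example.com", "opted_in": True, "opens": 2, "clicks": 1, "blasts_received": 5},
    {"email": "c@example.com", "opted_in": False, "opens": 0, "clicks": 0, "blasts_received": 5},
]

segment = unresponsive_segment(house_file)
print([r["email"] for r in segment])  # → ['a@example.com']
```

The filtered group can then receive its own tailored test, such as a re-permission or win-back offer.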
Split-creative testing is always a good idea. A split creative test is generally two or more messages created with one specific variable and sent to random, equal portions of the larger list. However, as a word of caution, we recommend that marketers resist the temptation to include so many variables that results are not measurable or actionable.
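The random, equal split itself is straightforward; here is a minimal sketch (my own, not any specific vendor's tool) that shuffles the file and deals recipients round-robin into one cell per creative version:

```python
import random

def split_cells(recipients, versions, seed=42):
    """Randomly assign recipients to equal-size cells, one per creative version."""
    pool = list(recipients)
    random.Random(seed).shuffle(pool)  # fixed seed keeps the split reproducible
    cells = {v: [] for v in versions}
    for i, recipient in enumerate(pool):
        cells[versions[i % len(versions)]].append(recipient)
    return cells

audience = [f"user{i}@example.com" for i in range(1_000)]
cells = split_cells(audience, ["A", "B"])
print(len(cells["A"]), len(cells["B"]))  # → 500 500
```

Because the assignment is random rather than, say, alphabetical, any difference between cells can be attributed to the one variable being tested.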
Additionally, one cannot emphasize the value of tracking enough. While this seems obvious, it is important to apply tracking codes, a unique campaign ID or specific landing pages to every link in your e-mail, to identify and analyze trends within a list.
Implementing a program of continuous testing and measuring the results against definitive objectives will help increase responses and make good lists great.
One more thing — all e-mail broadcasting and testing should be permission-based and comply with regulations such as CAN-SPAM.
Tracking codes, unique IDs or specific landing pages help identify e-mail list trends
Often, mailers that want to test new lists try to find lists that they have not previously tested. Yet, their money may be better spent by testing additional segments of lists that they already know work for their offer. Examples of these segments can include expires, older buyers, inquiries, sweepstakes entrants, or other names being filtered by their current selections.
While recency, frequency, and monetary (RFM) selections usually add value, at times they can limit names unnecessarily. I am aware of a mailer that was so enthralled with “hotline names” that they would only select “quarterly hotline names,” at a premium price, from one of their best outside lists.
Yet this mailer only mailed twice per year, so they should have been mailing to the names from this list with six-month recency — that is, the last two quarterly hotlines. This would have nearly doubled the names available from this list source, and the performance for this list would still have been “better” than that of the other lists in their campaign.
Another situation where “potentially good names” are left on the table: by selecting 12-month recency, most mailers overlook names with 13-plus months of recency. In the case of a special-interest catalog such as fishing, cooking, or apparel, chances are the individual in the 13- to 24-month category still fishes, cooks, or dresses; they just have not purchased from that particular catalog in the past 12 months.
The same goes for a subscription list, where the mailer may use the three- or six-month hotline yet ignore the list of “active subscribers.” These individuals are still exhibiting their interest in a particular subject by continuing to receive the magazine. And direct marketers keep their “older files” clean, so they are quite deliverable.
The bottom line is that when you find a good list from a good source, find out what additional segments are available from this same list owner, and test them. More often than not, you will find new areas of opportunity there.
Testing unused segments of previous lists can help unearth new customers
VP of business development, NextMark
Testing is not a perfect science, but it is a scientific process. With 40% of campaign success riding on list performance, it is important to apply the right knowledge when choosing the right lists to test.
Quantitative information is readily available for today's direct marketers, but you may just need to ask for it. There are three key metrics to look at before finalizing the choices on your test matrix.
The first of these key metrics is the test-to-continuation ratio. In this case, a lower ratio is better. This ratio measures the success of a particular mailing list based on several mailers' usage. The numerator is the number of mailers who tested the list, and the denominator is the number of these same mailers who placed a continuation order for the same direct marketing offer within 12 months.
List managers may have different guidelines, so be sure to ask how the test-to-continuation ratio is calculated. This ratio is often expressed on a per-continuation-mailer basis, e.g., four to one.
Secondly, remember to look at continuation mailer usage. Usage is available on more than 12,000 data cards. If a list does not have prior mailer usage on the data card, then ask for it. Be sure to clarify that you are interested in continuation mailer usage, unless the list is new to the market.
Also, marketers should be aware of the LPI or list popularity index. The Direct Marketing Association has a list search tool available online for its members and guests at lists.the-dma.org. If you run a search on ‘gift buyers', then you'll see the LPI scores in column five.
For example, the “Swiss Colony Catalog Food and Gift Buyers” mailing list shows an LPI of 100. New lists to market may not have a published LPI yet, but that does not mean they would not be successful. The list marketer can usually tell you the date when the file was made available for rental and/or exchange.
The test-to-continuation ratio, mailer usage, and the LPI are valuable metrics