Every campaign using lists requires proper measurement — after all, marketers need to learn from their results in order to do better. Four industry experts share their insight.
Sr. acct. executive, Leon Henry Inc.
One of the most important metrics to consider on list campaigns is the response rate, both gross (if applicable) and net. In this age of multichannel marketing, it is essential to track every response as accurately as possible. This requires a mailer to link each response to a specific key, ID code or finder number or, if that is not possible, to match back all respondents within a pre-determined time period to the mailed file.
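When each list is mailed under its own key code, crediting responses becomes a straightforward tally. A minimal Python sketch of that bookkeeping — the list names and key codes here are invented for illustration:

```python
# Hypothetical sketch: crediting responses to lists via key codes.
# List names and key codes are invented for illustration.
from collections import Counter

# Each rented list was mailed under its own key code.
key_to_list = {"A101": "Alpha Buyers", "B202": "Beta Subscribers"}

responses = [
    {"order_id": 1, "key": "A101"},
    {"order_id": 2, "key": "B202"},
    {"order_id": 3, "key": "A101"},
]

# Tally responses per list; anything without a known key falls out as "unmatched".
credited = Counter(key_to_list.get(r["key"], "unmatched") for r in responses)
print(dict(credited))  # {'Alpha Buyers': 2, 'Beta Subscribers': 1}
```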
But response is only part of the story. The cost to generate and complete a sale should ideally come in at or below break-even. To arrive at an accurate cost per order, promotional costs must be considered along with a number of other variables unique to each type of mailer.
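A basic cost-per-order check against break-even might look like the sketch below; the cost components and all figures are invented for illustration, and real campaigns would fold in many more variables:

```python
# Hypothetical sketch of a cost-per-order calculation; all figures invented.
def cost_per_order(promo_cost, list_rental, postage, orders):
    """Total promotional cost divided by completed orders."""
    total_cost = promo_cost + list_rental + postage
    return total_cost / orders

cpo = cost_per_order(promo_cost=5000.0, list_rental=1500.0, postage=3500.0, orders=250)
break_even = 45.0  # allowable cost per order at break-even (assumed)
print(f"CPO ${cpo:.2f} vs break-even ${break_even:.2f}")  # CPO $40.00 vs break-even $45.00
```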
Pay-up rate is extremely important to a publisher using a free trial offer to maximize up-front response, particularly if bad debt must be serviced. The cost of gross or net premiums must also be taken into consideration.
Mailers need to evaluate the performance of a list relative not only to the mailing as a whole, but also to other lists within the same interest category. If different packages, offers or prices have been tested, it is necessary to understand the effect of those variables.
A list’s priority in a merge-purge can also impact its performance if a random priority is not used. In cases where files with the lowest net name arrangements are given the lowest priority, matches to files higher in the priority sequence will be credited to those files. The result is that the response rates of the lower-ranking files will be artificially suppressed.
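The priority effect can be illustrated with a toy merge-purge: a name appearing on both files survives only on the higher-priority one, so the lower-priority file's net quantity — and thus its measured response — shrinks. A minimal sketch with invented names:

```python
# Sketch of how fixed merge-purge priority credits duplicate names (names invented).
def merge_purge(lists_in_priority_order):
    """Keep each name only on the first (highest-priority) list it appears in."""
    seen, net = set(), {}
    for list_name, names in lists_in_priority_order:
        kept = [n for n in names if n not in seen]
        seen.update(kept)
        net[list_name] = kept
    return net

result = merge_purge([
    ("List A (high priority)", ["smith", "jones", "lee"]),
    ("List B (low priority)",  ["smith", "jones", "park"]),
])
# List B loses its duplicates to List A, shrinking its net quantity.
print({k: len(v) for k, v in result.items()})
# {'List A (high priority)': 3, 'List B (low priority)': 1}
```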
Matching reports should be reviewed carefully to determine whether there are files that are losing substantial quantities of names against specific other lists, such as those within the same “family.” Occasionally, you’ll note that some files drop heavily against one another. This is often because they extensively promote to each other’s files. Having this information allows you to negotiate more favorable billing arrangements or to quantify the need to drop a file for not being cost-effective.
Evaluate list performance relative to other lists within the same category
Account mgr., list brokerage division, Millard Group
When measuring results, all sales should be considered no matter what channel was the source of the order. But there is no single solution for measuring cross-channel results that is appropriate for all mailers, since there are so many factors that affect results, such as the offer, mail piece, mix of lists in the mailing, and size of the company.
Matchback is the most common method of measuring total response across all channels, where all phone, Web and retail sales are matched against the original mailing list. Whenever a match between a name on an order — regardless of channel — occurs with a name on a past mailing, the matchback process credits that list with having “driven” that sale. But matchback is not without its limitations. For example, if a name appeared on multiple mailings, to which mailing should the sale be credited?
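One way to resolve that ambiguity is a recency rule: credit the most recent mailing within an attribution window. The sketch below shows that convention only as one possibility, not as a standard; the window length, names and list labels are all assumptions:

```python
# Hypothetical matchback sketch: match orders from any channel back to the
# mailed file; when a name appeared on multiple mailings, credit the most
# recent one (one common convention, not the only one).
from datetime import date

mail_history = {  # name -> [(mail_date, list_name), ...]
    "pat@example.com": [(date(2024, 3, 1), "List A"), (date(2024, 5, 1), "List B")],
}

orders = [{"name": "pat@example.com", "channel": "web", "order_date": date(2024, 5, 20)}]

WINDOW_DAYS = 60  # assumed attribution window

def matchback(order):
    # Keep only mailings that preceded the order within the window.
    candidates = [
        (d, lst) for d, lst in mail_history.get(order["name"], [])
        if 0 <= (order["order_date"] - d).days <= WINDOW_DAYS
    ]
    return max(candidates)[1] if candidates else None  # most recent mailing wins

print(matchback(orders[0]))  # List B
```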
Unfortunately, many mailers still do not utilize any type of matchback process. Instead, they continue the archaic method of applying a flat percentage allocation to all lists to account for multichannel sales. This process causes some list results to be overstated, and others understated, by channel.
On house file mailings, many mailers now conduct “hold out panel” mailings, where different groups of customers are sent different combinations of mailings and/or e-mails to determine the incremental gain of sending different frequencies of mailings, or even any mailings at all.
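Reading a hold-out panel reduces to comparing response rates between the mailed group and the withheld group; the difference is the mailing's incremental contribution. A minimal sketch with invented counts:

```python
# Sketch of reading a hold-out panel test; response counts are invented.
def incremental_lift(mailed_buyers, mailed_size, holdout_buyers, holdout_size):
    """Response rate of the mailed group minus the hold-out (no-mail) group."""
    return mailed_buyers / mailed_size - holdout_buyers / holdout_size

lift = incremental_lift(mailed_buyers=300, mailed_size=10000,
                        holdout_buyers=120, holdout_size=10000)
print(f"Incremental response from mailing: {lift:.2%}")  # 1.80%
```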
There has been a fundamental change in the strategy behind measuring results. Mailers once used matchback and hold out panels to determine the incremental value of Web site results. Now, mailers use these to determine the incremental value of the mail piece.
Ultimately, mailers should work with their list broker and service bureaus to determine the best way to measure and interpret results for each specific campaign.
Work with your list broker to best measure and interpret your results
VP, sales & mktg., Media Source Solutions
Marketers should look at the selects available vs. selects tested once they see their results, to determine whether or not a list has potential. If the initial results are marginal, they should investigate if a tighter hotline, mail order buyer or closer interest select is available. If results are stellar, they should continue regularly on the same segment tested, or broaden the select to expand the available universe.
Another consideration is the offer. First of all, consider how the customer responded to your offer vs. how the customer was brought onto the file. You should also pick the brain of the list manager, and see if similar offers have had success with the list, and if so on which segments.
Always take lifetime value of the customer through repeat business, cross-sells and up-sells into consideration when analyzing customer acquisition costs. Additionally, testing different offers and creatives to the same list against your control piece can produce a large impact on results.
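As a toy illustration of why lifetime value belongs in acquisition math, the sketch below models a customer worth acquiring at a first-order loss; the margin, retention rate and acquisition cost are all invented figures:

```python
# Hypothetical lifetime-value sketch; margins and retention rate are invented.
def lifetime_value(margin_per_year, retention_rate, years):
    """Sum of expected annual margin, decayed by the retention rate each year."""
    return sum(margin_per_year * retention_rate ** t for t in range(years))

ltv = lifetime_value(margin_per_year=40.0, retention_rate=0.6, years=5)
acquisition_cost = 55.0
# A customer can be worth acquiring at a loss on the first order
# if repeat business, cross-sells and up-sells cover the gap.
print(ltv > acquisition_cost)  # True
```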
Cross-channel customers should be measured the same as any other customers who responded to your mailing. If receiving a mail piece or catalog prompts a customer to purchase your products or services, whether they do it through the mail, by phone, online, or at a retail store, that is a home run for you.
Sending the same offer and creative across channels greatly strengthens brand recognition. If you do not, you may confuse your customers with an inconsistent brand image and brand experience, thereby skewing the campaign’s results. It’s important to make sure your customers and prospects recognize your brand no matter which medium you use to communicate with them.
Both the medium a customer responds through and the medium your offer was sent through should be measured and taken into consideration, in order to determine which medium will best serve future communication with that customer and produce the strongest relationship and results.
Mailers should always look for more selectable lists for best results
VP, list brokerage, Statlistics
All direct marketers have access to the same information to perform analysis, yet there isn’t a universal response methodology applicable for all. Each mailer has developed an analysis that works for them — some are quite advanced, while others are very simple.
How a mailer has historically analyzed data often makes the difference. The founder of a particular company may have established “benchmarks” to read results years ago, and those may still be the basis used today to judge list performance.
Simple analysis allows mailers to determine gross (and net) response per list, average amount spent per order and total sales per thousand pieces mailed (or per book). Advanced analysis adds other measurements, including return on promotion investment and total investment, profitability, cost per name, and long-term value.
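Those simple-analysis measures can be computed directly from a list's mail quantity and order totals; a small sketch with invented campaign figures:

```python
# Sketch of the "simple analysis" metrics named above; all numbers invented.
def list_metrics(mailed, gross_orders, net_orders, gross_sales):
    return {
        "gross_response_pct": 100 * gross_orders / mailed,
        "net_response_pct": 100 * net_orders / mailed,
        "avg_order": gross_sales / gross_orders,        # average amount spent per order
        "sales_per_m": 1000 * gross_sales / mailed,     # sales per thousand mailed
    }

m = list_metrics(mailed=25000, gross_orders=500, net_orders=450, gross_sales=22500.0)
print(m)
```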
Today there is a lot of controversy about distinguishing customers acquired through cross-channel efforts. Everyone wants to know if customers who order via the Internet are “Web-only customers.” Will they order again, or as frequently, if they do not get included on future direct mail campaigns? Yet there are many catalogers and publishers who have indicated that they know they need to mail to these customers more to stimulate additional sales.
Rather than make a fatal misinterpretation of these data, it would be better to retain all information in the customer’s history. With today’s technology, information storage is inexpensive, so it is better to save the information. Years from now, when more data or new techniques are developed, it may be more useful.
I once had a client who summed it up very well by saying, “If you cannot read it, do not do the project.” In other words, the goal is to maximize learning on your current efforts, so that you can improve the results of subsequent campaigns and profitability.
Save all data, which may become important as new measurements develop