
Use the Essentials of Catalog Response Report Design

Without solid, timely and complete response reports, catalogers cannot fairly and fully evaluate the results of list, offer, copy or other tests. Yet despite their critical importance, too many catalogers produce reports lacking in helpful detail.

A well-designed report not only highlights winners and losers but also gives insight into why some media, lists or tests work better than others, contrasts results with forecasts and flags back-end problems. To perform these functions, response reports need the following characteristics:

Brevity. For users to spot key numbers, reports must be compact, with no more than three lines of detail per source code, fewer if possible. For companies that don’t forecast results at the source code level, or that don’t experience significant order returns or cancellations, relevant details often can be limited to one line.

Emphasize relative measures. Keep reports brief and meaningful by showing relative rather than absolute response measures. Absolute measures can be limited to orders, sales and profit dollars.

Appropriate relative values include response percent; sales and profit/loss per piece, per thousand and/or per buyer; and promotion cost per contact, per thousand and/or as a percent of sales. Show gross margin percent if margins vary widely. When reports rely mostly on relative measures, users more easily spot over- and underperformers along with their potential causes (high returns, bad debt, low margin).
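To make this concrete, here is a minimal sketch of turning one source code's absolute figures into relative measures. The field names and sample numbers are hypothetical, not drawn from any particular catalog:

```python
def relative_measures(pieces_mailed, orders, net_sales, promo_cost, profit):
    """Convert one source code's absolute results into relative measures."""
    return {
        "response_pct": 100.0 * orders / pieces_mailed,
        "sales_per_thousand": 1000.0 * net_sales / pieces_mailed,
        "profit_per_piece": profit / pieces_mailed,
        "promo_cost_pct_of_sales": 100.0 * promo_cost / net_sales,
    }

# Example: a 50,000-piece drop producing 600 orders and $48,000 in net sales.
print(relative_measures(pieces_mailed=50_000, orders=600, net_sales=48_000.0,
                        promo_cost=18_000.0, profit=4_500.0))
```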

Comprehensiveness. Reports should include all elements meaningful to a business. For businesses with more than a 1 percent or 2 percent return, cancel, bad-debt or other back-end rate that affects sales or profits, report these factors for each source code as a percent of sales. It is not unusual for individual lists, media, offers or other alternatives to produce consistently above- or below-average back-end experience factors.

Project results. Use experience-based curves to project likely final absolute and, more importantly, relative results from responses to date. Though often unreliable for the first week or two, curves are typically stable by week 3. To improve curve reliability, base them on weeks of substantive order receipt, where week 1 is the first weekly period in which more than a few orders are received, rather than weeks from drop or in-home date. Curves also may be used for e-mail campaigns, though most responses on Internet initiatives arrive in days, not weeks.

Though a single curve may be used for all mailings, sophisticated mailers with numerous drops through the year should develop seasonal curves further adjusted for any promotion with a short real or implied cut-off – for example, a final pre-Christmas mailing, or one with an incentive to act by a deadline. Where order size varies consistently over the order receipt cycle, develop and apply separate order and dollar response curves. Where merchandise return rates are much above 12 percent, consider a separate curve to predict final returns based on actual to-date numbers.
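As an illustration of how a curve is applied, the sketch below projects a final result by dividing the to-date figure by the cumulative share of response typically received through the current week of substantive order receipt. The curve percentages and order counts are purely hypothetical:

```python
# Hypothetical cumulative curve: share of final orders typically in hand by the
# end of each week of substantive order receipt (week 1 = the first week in
# which more than a few orders arrive). Values are illustrative only.
CUMULATIVE_CURVE = {1: 0.18, 2: 0.42, 3: 0.63, 4: 0.78,
                    5: 0.88, 6: 0.94, 7: 0.98, 8: 1.00}

def project_final(to_date_value, weeks_of_receipt, curve=CUMULATIVE_CURVE):
    """Project the final absolute result from the to-date value and the curve."""
    pct_in_hand = curve[min(weeks_of_receipt, max(curve))]
    return to_date_value / pct_in_hand

# Example: 265 orders received through week 3 project to roughly 421 final orders.
print(round(project_final(265, weeks_of_receipt=3)))
```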

Show profits. With profits rarely a constant percentage of each source code’s sales, response reports need to provide a reasonably accurate profit measure for each code. To calculate it, start with merchandise sales net of returns and cancels and subtract the following costs (a brief sketch follows the list):

· Cost of goods sold. Use either the business’ average cost or, if the database has it, the actual cost of items ordered.

· Order processing and fulfillment cost. This typically will be an average amount to handle and ship an order. Average shipping charges often are subtracted from this amount, as it’s preferable not to include them in the average order amount. Where shipping charges are included, make sure cost of goods, if based on a percentage of sales, uses the correct sales base.

· Promotion cost. Include not only costs for print, creative, paper and postage, but also the effective rental charge for each list (net rental charges, less discounts, divided by the final quantity mailed) and the cost to merge/purge, overlay, prep lists for mailing, etc.

· Overhead. Corporate policy usually determines whether an overhead cost reflecting rent, executive salaries, corporate services, etc., is subtracted. While I prefer to show profit net of overhead, some firms prefer not to.
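Pulling these factors together, a per-source-code profit calculation might look like the following sketch. All names, rates and dollar figures are hypothetical, and the overhead line follows whatever policy the business adopts:

```python
def effective_rental_per_piece(net_rental_charges, quantity_mailed):
    """Effective list rental cost per piece actually mailed (net of discounts)."""
    return net_rental_charges / quantity_mailed

def source_code_profit(net_sales, cogs_pct, orders, cost_per_order,
                       promo_cost, overhead_pct=0.0):
    """Net sales less cost of goods, fulfillment, promotion and (optional) overhead."""
    cost_of_goods = net_sales * cogs_pct    # or actual item costs if the database has them
    fulfillment = orders * cost_per_order   # average cost to handle and ship an order
    overhead = net_sales * overhead_pct     # include only if corporate policy requires it
    return net_sales - cost_of_goods - fulfillment - promo_cost - overhead

# Example: $48,000 net sales, 45% cost of goods, 600 orders at $6.50 each to
# fulfill, $18,000 total promotion cost (print, postage, effective list rental,
# merge/purge), overhead not charged.
print(source_code_profit(48_000.0, 0.45, 600, 6.50, 18_000.0))
```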

Allocate miscellaneous orders. The growth of unattributable Internet orders, in particular, makes it even more important that catalogers fairly allocate unsourced orders that belong to a promotion back to specific source codes, so profits can be calculated on post-allocation sales. Failing to allocate such orders increases the odds that a source with an acceptable true response will not be reused because its attributed sales alone don’t reach the minimum acceptable level. The most common allocation method remains proportionate: miscellaneous orders are allocated based on each code’s attributed sales as a percentage of total attributed sales.
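A minimal sketch of that proportionate allocation, with hypothetical source codes and dollar figures:

```python
def allocate_misc_sales(attributed_sales, misc_sales):
    """Spread unsourced sales across codes in proportion to attributed sales."""
    total = sum(attributed_sales.values())
    return {
        code: sales + misc_sales * (sales / total)
        for code, sales in attributed_sales.items()
    }

attributed = {"A01": 48_000.0, "B07": 32_000.0, "C12": 20_000.0}
print(allocate_misc_sales(attributed, misc_sales=10_000.0))
# A01 receives 48% of the $10,000 in miscellaneous sales, B07 32%, C12 20%.
```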

Provide appropriate totals and sub-totals. Good response reports show a grand total for the campaign, sub-totals for house and prospect lists and sub-totals for each split test. These figures let users easily compare each source code’s results to its peer average and weigh the performance of test panels against one another.
