Who Responded to the Promotion?

The tracking and measurement of a catalog marketing campaign is essential to assessing its value. In addition to quantifying the success of the effort as a whole, it provides a method for determining the particular message, creative and incentive strategies that are most effective.

By leveraging this information, the direct marketer can select the promotional investments that will maximize long-term revenues and profits.

Promotional tracking and measurement was a straightforward process in 1981 when I first got into the direct marketing business. I had P&L responsibility for several continuity and subscription businesses. My employer had only one order channel: direct mail. There was no call center. Orders would arrive through the mail on pre-printed forms with source codes. Every day I would receive “flash reports” from my operations center informing me of the latest response information for my promotions.

Things are dramatically different today. With multiple overlapping promotion and order channels, it can be almost impossible to determine who responded to a given offer. Often, the promotion channel and the order channel for a given purchase are not the same. One individual, for example, might receive a catalog and a follow-up e-mail, then order over the Web but fail to enter a source code.

The Web as a confounding factor. Attributing Web orders to outside rental lists and internal house segments is particularly problematic. This is because it is common for fewer than 25 percent of Web orders to include a source code. Most direct marketers attempt to counteract this by employing a universal attribution factor (“extrapolation percentage”) to allocate non-source-coded Web orders to outside rental lists and a second factor for internal house segments.
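
The arithmetic is simple enough to sketch. In the Python fragment below, the factor values and the order count are invented for illustration; they are not figures from any actual program, and the two-pool split is just one plausible reading of how the factors are applied:

```python
# Hypothetical two-factor allocation of non-source-coded Web orders.
# Both factor values and the order count are invented for illustration.

RENTAL_FACTOR = 0.60   # universal share credited to outside rental lists
HOUSE_FACTOR = 0.40    # second factor, credited to internal house segments

uncoded_web_orders = 1_000   # Web orders that arrived without a source code

to_rental_lists = uncoded_web_orders * RENTAL_FACTOR    # 600 orders
to_house_segments = uncoded_web_orders * HOUSE_FACTOR   # 400 orders
print(to_rental_lists, to_house_segments)               # 600.0 400.0
```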

The use of universal attribution factors implicitly assumes that Web orders as a percent of total orders are consistent across rental lists and across house segments. As the results below illustrate, that assumption rarely holds. Employing universal Web attribution factors when calculating metrics such as cost per order and contribution per thousand can therefore be very misleading.
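
A worked example makes the danger concrete. The sketch below borrows the 38.6 percent and 23.5 percent Web shares reported for two rental lists in the next section; the mail cost, the order counts and the simplifying assumption that no Web order carries a source code are hypothetical:

```python
# Two rental lists with identical true performance but different Web shares.
# A single universal gross-up factor mis-states cost per order for both.

lists = {
    "List 1": {"coded_orders": 614, "true_web_share": 0.386},  # 386 uncoded Web orders
    "List 2": {"coded_orders": 765, "true_web_share": 0.235},  # 235 uncoded Web orders
}
mail_cost = 10_000        # assumed promotion cost per list
universal_factor = 1.45   # the same gross-up applied to every list

for name, d in lists.items():
    true_total = d["coded_orders"] / (1 - d["true_web_share"])  # 1,000 orders each
    estimated_total = d["coded_orders"] * universal_factor
    print(f"{name}: true CPO ${mail_cost / true_total:.2f}, "
          f"universal-factor CPO ${mail_cost / estimated_total:.2f}")

# List 1: true CPO $10.00, universal-factor CPO $11.23  (looks worse than it is)
# List 2: true CPO $10.00, universal-factor CPO $9.02   (looks better than it is)
```

Two lists with identical true performance end up ranked differently, purely because the same factor was applied to both.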

Several rental list results, taken from a recent single season for a niche cataloger, illustrate the degree to which Web orders as a percent of total orders can vary.

First, within two outside lists that offered virtually identical products, one list generated 38.6 percent of its orders via the Web while the other produced 23.5 percent of its sales through the Web. Second, within two different selects from a single list, previous catalog buyers generated 54.8 percent of sales via the Web vs. previous Internet buyers, who produced 75.4 percent of sales through the Web. Finally, within a cover change test for a specific list, Cover A recipients produced 10.4 percent of their orders on the Web vs. Cover B recipients, who produced 23 percent of business online.

Within the house file, the differences were just as dramatic.

Four customer segments generated markedly different percentages of sales via the Web: Segment A, 15.7 percent; Segment B, 6 percent; Segment C, 69.2 percent; and Segment D, 44.7 percent. Also, within a single RFM segment, previously Web-only buyers generated 71.2 percent of their orders on the Web, compared with previously phone-only purchasers, who generated just 1.6 percent of their orders online.

Finally, among house file names who had asked for information but not yet purchased, Source A’s names produced 12.5 percent of their orders via the Web while Source B’s names produced 82.8 percent of their orders online.

There are two considerations that can magnify these differences when calculating metrics such as cost per order and contribution per thousand. First, the cost to process a Web order is likely to be different from the cost to process a phone order. Second, the average order size for Web orders can be significantly different from phone orders.
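
To see how much these two considerations can matter, consider a sketch in which two segments respond at identical rates and differ only in Web share. Every dollar figure, cost and rate below is assumed for illustration; the 69.2 percent and 6 percent shares echo Segments C and B above:

```python
# Hypothetical contribution-per-thousand comparison: two segments with the
# same response rate, differing only in Web-order share. All dollar figures,
# rates and costs below are invented for illustration.

def contribution_per_m(web_share: float) -> float:
    pieces_mailed = 10_000
    response_rate = 0.02                 # same for both segments
    web_aov, phone_aov = 95.0, 80.0      # average order values by channel
    web_cost, phone_cost = 2.0, 6.0      # per-order processing costs
    margin, cost_per_piece = 0.50, 0.60

    orders = pieces_mailed * response_rate
    web_orders = orders * web_share
    phone_orders = orders - web_orders
    revenue = web_orders * web_aov + phone_orders * phone_aov
    processing = web_orders * web_cost + phone_orders * phone_cost
    contribution = revenue * margin - processing - pieces_mailed * cost_per_piece
    return 1_000 * contribution / pieces_mailed

print(contribution_per_m(0.692))  # ~239: a Web-heavy segment (cf. Segment C)
print(contribution_per_m(0.060))  # ~94: a phone-heavy segment (cf. Segment B)
```

Identical response rates, yet contribution per thousand differs by a factor of more than two, purely because of channel mix.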

An example. Assume that two hypothetical customers, Dave and Marilyn, have each ordered twice. Each ordered the first time on Dec. 7 and the second time on Dec. 21.

Dave’s two orders have come in over the Web. He did not enter a source code either time. Dave had never been contacted before his first order. Therefore, there is quite a bit of evidence that Dave “found” the direct marketing company on his own without being prompted by a promotional piece.

It is reasonable to conclude that Dave has a significant chance of ordering again on his own, whether or not he receives any subsequent contacts. Nevertheless, it is likely that follow-up promotions will somewhat increase the probability of his responding again.

Marilyn ordered both times over the phone and provided a source code. Both times the source code corresponded to the same direct mail prospect list from a late-November drop. Therefore, there is quite a bit of evidence that Marilyn would not have “found” the company on her own without having first been promoted. It is reasonable to conclude that Marilyn has less chance than Dave of ordering again without the stimulus of follow-up promotions.

Assume that a subsequent direct mail piece was dropped on Feb. 1 and that Dave and Marilyn both responded on Feb. 6. Unfortunately, both executed their orders over the Web and failed to provide a source code.

Did Dave and Marilyn respond to the promotion, or was it just a coincidence that their orders came in five days after the drop? Unfortunately, there is no clear-cut answer. All we have are probabilities – and different ones for Dave and Marilyn.

What exactly are these probabilities?

The answer. There are several things that can be done to increase clarity within multi-channel response attribution.

Savvy direct and database marketers have long understood that retail is what is known as an “open loop” environment. In open loop environments, individuals often make purchases without being promoted. As a result, there is no guaranteed cause-and-effect relationship between the promotional stimulus and subsequent response.

In contrast, many traditional direct mail marketers – catalogers, continuities, fund-raisers and the like – historically have operated within straightforward “closed loop” environments. However, the advent of the e-commerce channel has opened up even the most “closed” of loops. There are two approaches that must be considered.

First, promotional results must be tracked incrementally and compared with identical groups that received different stimuli. Given that the two groups are alike in all other ways, significant differences in metrics such as response rate and revenue can be attributed to the impact of the promotion.
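
In code, the comparison is elementary. The group sizes and order counts below are hypothetical:

```python
# Minimal incremental-measurement sketch: a mailed group vs. an identical
# holdout that received no contact. All counts are hypothetical.

mailed_size, mailed_orders = 50_000, 1_250     # received the catalog
holdout_size, holdout_orders = 50_000, 900     # identical group, not mailed

mailed_rate = mailed_orders / mailed_size      # 2.50%
baseline_rate = holdout_orders / holdout_size  # 1.80%

incremental_rate = mailed_rate - baseline_rate        # 0.70%
incremental_orders = incremental_rate * mailed_size   # 350 orders

# The share of responders truly driven by the promotion, which is the
# quantity the Dave-and-Marilyn question asks for:
p_incremental = incremental_rate / mailed_rate        # 28%

print(f"{incremental_orders:.0f} incremental orders; "
      f"{p_incremental:.0%} of responders attributable to the mailing")
# In practice, a significance test should confirm the gap is not noise.
```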

Second, long-term test strategies must be developed so that the cumulative incremental performance of multiple promotions – the so-called “building effect” – can have sufficient time to manifest itself. This is because, in many open-loop environments, a single promotion can display little, if any, incremental improvement versus the “baseline.” This is a manifestation of the fact that it can be difficult for a single promotion to “break significantly through the clutter” of overlapping multi-channel efforts.
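
One simple way to watch the building effect emerge, again with invented per-drop rates, is to accumulate each drop’s lift against a permanent holdout:

```python
# Hypothetical building-effect tracking: per-drop response rates for a
# promoted group vs. a permanent holdout. Any single drop's lift is small,
# but the cumulative gap becomes readable over time.

drops = [               # (promoted rate, holdout rate) for four successive drops
    (0.0210, 0.0200),   # drop 1: lift barely visible on its own
    (0.0225, 0.0200),
    (0.0248, 0.0201),
    (0.0270, 0.0199),   # by drop 4 the gap is unmistakable
]

cumulative_lift = 0.0
for i, (promoted, baseline) in enumerate(drops, start=1):
    cumulative_lift += promoted - baseline
    print(f"After drop {i}: cumulative incremental response {cumulative_lift:.2%}")
```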

Borrowing open-loop measurement techniques from the world of retail is a complex process. But it is the only way to answer the seminal direct marketing question: “Who responded to the promotion?”
