Improve Use of Quantitative Methods

It is remarkable how many direct marketing companies fail to fully embrace sophisticated quantitative techniques to better target prospect and customer promotions. Several years ago, a large study found that more than half of catalogers still use recency, frequency, and monetary value (RFM) cells rather than statistics-based predictive models. Perhaps this is understandable for mom-and-pop organizations. But the percentage was only marginally lower for large catalogers.


Direct marketers also display a corresponding lack of sophistication in setting up test designs. Many organizations are unschooled in the basics of experimental design. Some companies, for example, still think that 50 responders is always an adequate number for making test panel comparisons. Invariably, they are unpleasantly surprised when informed of how little reliability this actually provides.


Even worse, some direct marketers run test panels of 5,000 regardless of the anticipated response rate, even those whose typical prospecting rates are just a fraction of a percent. Often, these companies make important decisions based on 15 to 30 responders.
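The unreliability of small responder counts is easy to quantify with a standard binomial confidence interval. A minimal sketch (using the normal approximation; the 5,000-piece panel and 25-responder figures are illustrative, in line with the counts mentioned above):

```python
import math

def response_rate_ci(responders, mailed, z=1.96):
    """Approximate 95% confidence interval for a test panel's
    response rate, via the normal approximation to the binomial."""
    p = responders / mailed
    half_width = z * math.sqrt(p * (1 - p) / mailed)
    return p - half_width, p + half_width

# A 5,000-piece panel that pulls 25 responders (0.5 percent):
low, high = response_rate_ci(25, 5000)
print(f"observed 0.50%, 95% CI roughly {low:.2%} to {high:.2%}")
```

With only 25 responders, the observed 0.50 percent rate carries a confidence interval of roughly 0.30 to 0.70 percent, a swing of about 40 percent in either direction, which is far too wide to compare panels reliably.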


So, the broader question is why there is such resistance by direct marketers to embrace quantitative approaches when making prospect and customer circulation decisions. There are, we think, several basic reasons:


Seat-of-the-pants direct marketers. Many direct marketing operations are run by seat-of-the-pants circulation professionals, and some by entrepreneurial founders or founder-families. These companies have found a viable niche, and have grown and even thrived using homegrown approaches. Within such organizations, methods of operation tend to become rigidly codified over the years. Often, this is even true for companies that have evolved into large entities.


One cataloger, for example, grew from a niche operation to a $250 million retail/direct hybrid. However, throughout this explosive rise in scale, the organization clung to its traditional, homegrown, cell-based method of selecting names for mailings. This was a convoluted process that required several days of effort by the circulation manager for every promotion, and at least an equal amount by its service bureau.


A back-end analysis by a consultant indicated that, in reality, this cataloger's complex selection strategy generally boiled down to simple "de facto" criteria. House multi-buyers, for example, would be promoted if one of the following conditions were met:


o Bought from any of several core titles within the past 36 months; that is, if a purchase was made from any of the books, then the customer would receive all of the books.


o Had a company credit card.


o Hit against an outside rental list.


o Had an Abacus score that fell within a certain range.


Despite this clarification, the cataloger clung to its tortuous selection strategy. The thinking was that the traditional method had been successful, so why risk trying something else?


Data miners who alienate. To exacerbate matters, many direct marketers have had terrible experiences with data miners who failed to deliver on their promises. These companies have become extremely resistant to sophisticated analytical techniques. For example, we frequently hear, "We tried regression modeling, and it didn't work all that well." Or, "We built a model and it looked good for a while, but then it fell apart."


The problem generally lies with the types of individuals who build the models. Our industry is filled with analysts who understand the mechanics of building regression-based predictive models. However, a much smaller number have the experience and ability to approach projects as insightful data detectives and savvy business people.


Building a potent model that stands up over time generally requires acute data insight and an understanding of the mechanics of direct marketing. Consider, for example, that the following key decisions in any predictive modeling project require "in-the-trenches direct marketing" experience rather than an advanced degree in statistics:


o What mailings/drops should make up the analysis file?


o What ratio of responders versus non-responders should be used, and should it vary by mailing/drop?


o Should a single- or multiple-model strategy be employed?


o If a multiple-model strategy is used, how should the individual models be defined?


o What are the appropriate dependent variables: response, gross sales, net sales, or gross margin?


o How should missing values be handled, especially with continuous potential predictors?


o Should outliers be eliminated or capped?
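The last two decisions above, in particular, are where inexperienced modelers stumble. A hypothetical sketch of one defensible approach, imputing a missing continuous predictor with its median (while flagging it) and capping outliers at a percentile rather than dropping them; the field name `total_dollars` is invented for illustration:

```python
def percentile(values, pct):
    """Return the pct-th percentile of a list (nearest-rank method)."""
    ordered = sorted(values)
    idx = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[idx]

def prep_dollars(records, field="total_dollars", cap_pct=99):
    """Median-impute missing values (adding an indicator flag so the
    model can learn from missingness itself) and cap extreme values."""
    known = [r[field] for r in records if r[field] is not None]
    median = percentile(known, 50)
    cap = percentile(known, cap_pct)
    for r in records:
        r[field + "_missing"] = 1 if r[field] is None else 0
        value = median if r[field] is None else r[field]
        r[field] = min(value, cap)   # cap, rather than discard, outliers
    return records
```

Capping rather than eliminating outliers preserves the fact that a big spender is a big spender without letting one extreme value dominate a regression fit; the indicator flag preserves any signal carried by the missingness itself.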


Unfortunately, most companies underestimate the importance of "in-the-trenches" direct marketing knowledge when interviewing potential data miners. Hence, they tend to hire pedigreed, advanced-degree statisticians who proceed to make every rookie mistake in the book.


Many of the best direct marketing analysts do not have an advanced degree in statistics. Instead, they are "data detectives" with years of DM experience. Of course, seasoned data detectives with advanced statistics degrees are the ideal. However, they are rare.


Statistically unschooled list brokers. Resistance to quantitative techniques also is seen in the many list professionals who lack statistical sophistication. Making test recommendations is a core function of the list brokerage business. Nevertheless, many practitioners have never had a statistics course.


Recently, a veteran list broker recommended test quantities as small as 5,000 to a direct marketer whose prospecting response rates run as low as 0.25 percent. At that response rate, a 5,000-name test would yield only about 13 responders, far from adequate to read the results with confidence.


When confronted with a basic statistical formula for calculating appropriate test quantities, the broker refused to believe in its validity. She was skeptical when informed that, except for niche lists with very low rollout quantities, universe size generally has only a secondary effect in determining the appropriate test quantity.


For example, assume an expected response rate of 1 percent, and that we want to be 80 percent confident that the actual response rate will be at least 0.9 percent. With a rollout universe of 50 million, we need a test quantity of 6,984. With a universe of 50,000, we require a quantity of 6,129, just 855 fewer names. Only when we get down to small universe quantities are the required test quantities markedly smaller. A universe of 10,000, for example, requires a test of 4,113.


One of the list broker's comments about the basic formula for calculating appropriate test quantities summed up the lack of statistical sophistication that is all too common in our industry: "If this formula is valid, how come I've never run into it before in my 20 years in the business?"


The bottom line: there is a major opportunity for direct marketers to improve the analytical sophistication of their circulation strategies.

