
Answers to Data Mining Questions

This is part one of a two-part column.

I received many favorable e-mails in response to my article “Answers to 4 Common List Questions” (DM News, Nov. 4, 2002). Therefore, I am repeating the format with a focus on a hot topic: how to integrate statistics-based predictive models into a coordinated, multichannel contact strategy.

Question No. 1: We sell four services across two main channels: direct mail and telemarketing. Also, we employ e-mail as a customer-side support channel. We are developing a coordinated, multichannel contact strategy across our four services. Therefore, we are replacing our outdated recency, frequency and monetary cells with regression models. For each promotion to a prospect, inquirer or customer, please outline how the models can help us answer: a) which service to offer; b) which channel to employ; and c) what the optimal channel mix should be, as well as the best timing and frequency across promotions.

It is good that you recognize the general incompatibility of RFM cells with modern database marketing. For companies with several services and multiple channels, the large number of cells generated by a typical RFM approach will result in a “proliferation quandary.” You will end up with a choice: either too many cells to be practical or too few to be effective.

This answer applies equally to companies that offer multiple products – to catalogers with several titles just as much as to banks with multiple services such as home equity loans.

The answers to “a” and “b” – which service to offer and which channel to use – are based on the construction of a statistics-based model (or models) for each permutation of service and channel. The goal is to accurately estimate the profitability of each service/channel permutation.

One complication is that model scores are not directly comparable. The reason involves statistical theory that is beyond the scope of this article. For example, assume that a household’s score for the Product X model is better than that of the Product Y model. Many non-data miners will be surprised that, from a purely statistical perspective, this does not necessarily mean that the household should receive a promotion for Product X.

There are valid ways to compare models. A preferred method at Wheaton Group is to focus on financial projections tied to each model segment. This avoids any technical “landmines” and provides the bonus of being a business-oriented solution.
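The segment-level financial approach can be sketched in a few lines. This is a hypothetical illustration, not Wheaton Group's actual method: the products, channels, segments and per-piece profit figures are all invented, and "projected profit per piece" is assumed to come from past results for each model segment.

```python
# Sketch: choosing a service/channel for each household by projected
# profit per model segment, not by raw model score.
# All names and dollar figures below are hypothetical.

# Each scored list is split into segments (e.g., deciles); each segment
# carries a financial projection derived from past promotion results.
projections = {
    ("Product X", "direct mail"):   {1: 1.40, 2: 0.75, 3: 0.10, 4: -0.30},
    ("Product X", "telemarketing"): {1: 1.10, 2: 0.60, 3: -0.05, 4: -0.50},
    ("Product Y", "direct mail"):   {1: 1.65, 2: 0.55, 3: 0.05, 4: -0.25},
}

def best_offer(segment_by_model):
    """Pick the service/channel pairing with the highest projected
    profit per piece, given the segment this household falls into
    under each model."""
    return max(
        segment_by_model,
        key=lambda model: projections[model][segment_by_model[model]],
    )

# Raw scores across models are not comparable; projected dollars are.
household = {
    ("Product X", "direct mail"): 2,    # $0.75 projected per piece
    ("Product X", "telemarketing"): 2,  # $0.60 projected per piece
    ("Product Y", "direct mail"): 1,    # $1.65 projected per piece
}
print(best_offer(household))  # ('Product Y', 'direct mail')
```

Note that the comparison happens entirely in dollars: the models are never compared score-to-score, which sidesteps the statistical landmines described above.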

The answer to “c” – the optimal channel mix as well as the best timing and frequency across promotions – is more a function of testing than of predictive modeling. A series of well-constructed longitudinal test panels must be created to measure metrics such as: the amount of cannibalization across products/services; the rate of cannibalization within products/services caused by re-mails; and the effect of time between promotions on these cannibalization effects.
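As a simple illustration of what such test panels yield, cross-product cannibalization can be expressed as the share of one product's response lost when another product is promoted in the same window. The panel design and response rates below are hypothetical:

```python
# Sketch: measuring cross-product cannibalization from two test panels.
# Response rates are hypothetical illustrations.

# Response to the Product Y promotion under two longitudinal conditions:
y_only_rate = 0.042    # panel that received only the Product Y piece
y_with_x_rate = 0.035  # panel that also received a Product X piece

# Cannibalization rate: fraction of Y response lost when X is promoted.
cannibalization = (y_only_rate - y_with_x_rate) / y_only_rate
print(f"{cannibalization:.1%}")  # 16.7%
```

Repeating this comparison across panels with varying gaps between promotions is what reveals the effect of timing on cannibalization.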

The mechanics for executing everything described within this answer – in an environment of multiple services (or products), channels and seasons – are far from trivial. However, this work is an absolute requirement for arriving at contact management strategies that are data-driven and financially focused.

Question No. 2: How do I know when it is time to rebuild a model?

A model likely will have to be rebuilt whenever one of two things happens: A change occurs in the underlying structure of the source data, or a change occurs in the fundamental dynamics of the business – when a totally different type of customer is being attracted to the product or service. Models extrapolate from the past to the future, based on an assumption of environmental constancy. When constancy is disrupted, extrapolations become problematic.

Models generally are remarkably resistant to non-dramatic changes in creative and price. As long as the fundamentals of the business remain reasonably stable and the structure of the source data does not change, models are likely to retain their potency for years.

There is a way to determine the likelihood that model performance will deteriorate. Every time a model is scored in a production environment, profiles should be run on each segment. These profiles should include averages and, optionally, distributions for every one of the model’s predictor variables. They also should include whatever RFM and/or demographic elements are helpful for “painting a picture” of the best customers versus the worst, as well as those in between.

These profiles should not diverge significantly from profiles run off previous successful production mailings and profiles run off the original data set used to validate the model. The extent to which divergence has occurred is the extent to which model deterioration is likely to be encountered. Sudden, dramatic divergence generally is the result of a change in the structure of the source data. Gradual divergence often is symptomatic of a change in the dynamics of the business.
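A minimal drift check along these lines compares each predictor variable's segment average against the baseline from the original validation file and flags any that have moved materially. The variable names, baseline values and the 15 percent tolerance here are all hypothetical:

```python
# Sketch: flagging model-segment profile drift against a baseline.
# Variables, averages, and the tolerance are hypothetical.

baseline = {  # segment-1 averages from the original validation file
    "months_since_last_order": 4.2,
    "orders_last_24_months": 3.1,
    "avg_order_dollars": 86.0,
}

current = {  # the same averages from this production scoring
    "months_since_last_order": 4.5,
    "orders_last_24_months": 3.0,
    "avg_order_dollars": 61.0,  # a sharp drop worth investigating
}

def drifted(baseline, current, tolerance=0.15):
    """Return the predictor variables whose average has moved more
    than `tolerance` (as a fraction of the baseline value)."""
    return [
        var for var, base in baseline.items()
        if abs(current[var] - base) / base > tolerance
    ]

print(drifted(baseline, current))  # ['avg_order_dollars']
```

Whether a flagged divergence appeared suddenly (suggesting a source-data change) or built up across successive scorings (suggesting a shift in the business) is what the history of these profiles reveals.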

Next month: data mining issues.
