
Evaluating Neural Nets and Regression: Which to Use and When

During the past few years, a growing number of database marketers have debated whether to use a new modeling technique called artificial neural networks to forecast response.

The technique’s proponents – prominent among them are software vendors promoting their products – say it is highly accurate because it exploits “nonlinearities” commonly found in direct response data. They say even nontechnical users can build neural net models quickly because this artificial intelligence tool can be automated without loss of accuracy and requires less data preprocessing than regression. A salesman from one of the vendors said, “Your secretary can build a response model before lunch.”

However, many model builders – statisticians and others – view regression as the gold standard by which to evaluate techniques because of the extensive theory and applications built over decades. They concede, though, that the quality of models corresponds directly to the time spent and the expertise of the model builder. Unfortunately, as shown by rapidly escalating wages, there is a shortage of statistically savvy people skilled at building database and e-commerce models.

So, what are database marketers to do?

Should they use neural network tools as opposed to regression?

Do neural networks measure up to regression for building predictive models in direct response?

What do they excel at? Can just anyone use them successfully?

When can they be used, and when should they be avoided?

To find out, we compared the performance of neural nets and regression on the same real data sets.

Two data sets were used: a smaller, simpler set and a larger, more complex set. The simple data set consisted of 11 data elements (variables to model builders) selected by our statistician from among more than 100 as the most predictive. This data set was cleaned of errors and outliers – a statistical term meaning extreme observations that are detached from the remainder of the data – to present the automated routines with an easier problem.

A leading direct marketing modeling software package was used to build neural network models that predicted response to a catalog mailing by using its automatic setting. Also used were comparable settings to build automated regression models. The results showed that neither technique could claim to be superior. Both produced good models when presented with this carefully selected group of explanatory variables.
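To make that kind of automated head-to-head test concrete, here is a minimal sketch written with the open-source scikit-learn library rather than the commercial package we used. The data are synthetic stand-ins for the 11 pre-selected catalog variables, and the model settings shown are illustrative assumptions, not the package's automatic defaults.

```python
# A minimal sketch (not the package used in our tests): fit an automated
# logistic regression and a small neural net on the same response data,
# then score a holdout sample with each.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a cleaned file with 11 predictors and ~3% response.
X, y = make_classification(n_samples=20000, n_features=11,
                           weights=[0.97], random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

models = {
    "regression": make_pipeline(StandardScaler(),
                                LogisticRegression(max_iter=1000)),
    "neural net": make_pipeline(StandardScaler(),
                                MLPClassifier(hidden_layer_sizes=(8,),
                                              max_iter=500, random_state=0)),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = model.predict_proba(X_hold)[:, 1]  # response probabilities
```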

As a result, we wondered whether an experienced model builder could outperform these two automated models. An experienced statistician built a regression model manually by using SAS, the best-selling statistical package. He produced a model with only a slightly higher lift than the two automated models. The conclusion: With the preprocessing already done on this data set, even an experienced modeler had a hard time producing results superior to the automated package.
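Lift, as used throughout this comparison, measures how much better the model's best-scoring names respond than the file as a whole. Our tests do not hinge on one particular cutoff, but a common choice is the top decile; the sketch below computes it on a holdout sample and could be applied to the scores produced in the previous example.

```python
import numpy as np

def top_decile_lift(y_true, y_score):
    """Response rate in the best-scoring 10% of names, divided by the
    overall response rate on the same holdout sample."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    order = np.argsort(y_score)[::-1]                 # highest scores first
    top = y_true[order][: max(1, len(y_true) // 10)]  # top decile
    return top.mean() / y_true.mean()

# Example with the holdout scores from the previous sketch:
# top_decile_lift(y_hold, scores["regression"])
# top_decile_lift(y_hold, scores["neural net"])
```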

So, to test the importance of preprocessing by an experienced analyst – activities such as error substitution, outlier detection and variable selection – the same models were built on a complex data set. This contained more than 100 variables, some of which included errors, missing values and outliers – values far larger than the rest.

The automated software’s routines to detect errors, handle missing data and deal with outliers were set at their default settings while the statistician dealt with these problems in a manner commonly accepted as good statistical practice.
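The statistician's exact procedure is not reproduced here, but the sketch below shows one commonly accepted approach to that kind of manual preprocessing: substitute medians for missing or erroneous values, cap outliers at the 1st and 99th percentiles, and screen variables by their univariate relationship to response. The DataFrame `raw` and the correlation-based screen are assumptions for illustration, not the method used in our tests.

```python
# One common manual preprocessing pass (a sketch, not the statistician's
# exact procedure), applied to a hypothetical pandas DataFrame `raw`.
import pandas as pd

def preprocess(raw: pd.DataFrame, target: str = "response",
               top_k: int = 11) -> pd.DataFrame:
    df = raw.copy()
    numeric = df.select_dtypes("number").columns.drop(target)

    for col in numeric:
        df[col] = df[col].fillna(df[col].median())   # error/missing substitution
        lo, hi = df[col].quantile([0.01, 0.99])
        df[col] = df[col].clip(lo, hi)                # cap extreme outliers

    # Crude variable screening: keep the predictors most correlated
    # with response (a stand-in for more careful variable selection).
    corr = df[numeric].corrwith(df[target]).abs().sort_values(ascending=False)
    keep = corr.head(top_k).index.tolist()
    return df[keep + [target]]
```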

The results were clear: The statistician’s manually built regression model had a lift significantly higher than either of the automated models. These results reflect a general trend in our testing – that automated ANN and regression models perform about the same, but neither performs as well as a regression model built with care by an experienced model builder.

This difference is even more marked when models are evaluated not on a holdout sample, but on the results of in-the-mail tests. The larger number of variables used by automated techniques – especially ANN – damages their accuracy in the mail and over time.

So, do neural network tools really measure up to regression for building predictive models in direct response?

Yes, they produce results comparable to regression models built with the same automated package.

What do they excel at? We have not been able to identify a class of models (for example, response, cross-sell, attrition) at which ANN consistently excels.

Can just anyone use them successfully? No. Experience counts. The automated package produced better results in the hands of an experienced model builder than with a novice.

When can they be used and when should they be avoided? ANN software can be used just about anywhere that regression can. Both types of automated models are adequate when the number of pieces to be dropped is small. However, large mailings and mailings with thin margins often can justify the higher accuracy and cost of manually built regression models.

The most dramatic differences between neural networks and regression techniques lie not in their accuracy but in the ease of implementing them. The former are more complex and therefore more difficult to explain. Hence, many analysts treat them as black boxes and do not even try to interpret the business significance of the variables used in the models.

Regression models, however, are easier to explain. Model variables often can be traced to real-world phenomena. For example, “Response increases with age, but at a decreasing rate after 50.” That is why it is said that regression emphasizes human intelligence whereas neural nets emphasize artificial intelligence.
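That age statement can be written directly into a regression. The sketch below uses the open-source statsmodels library and a hypothetical DataFrame `df` holding `age` and a 0/1 `response` column; a hinge term at age 50 lets a negative second coefficient express "increasing, but at a decreasing rate after 50."

```python
# Sketch: an interpretable age effect in a logistic regression, using a
# hinge (piecewise-linear) term at 50.  `df` is a hypothetical DataFrame
# with `age` and a 0/1 `response` column.
import numpy as np
import statsmodels.api as sm

X = sm.add_constant(np.column_stack([
    df["age"],
    np.maximum(df["age"] - 50, 0),   # extra slope that applies only after 50
]))
fit = sm.Logit(df["response"], X).fit()
print(fit.params)  # a negative second slope means the effect flattens after 50
```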

Richard Deere is president of Direct StatSoft Inc., New York, a consulting firm specializing in data mining and statistical modeling for database marketing and e-commerce.
