What's in a Number? What You Want to Believe
Every day, it seems, another study or report is released that purports to tell us what is happening or to forecast what will happen in the very near future. I say near future because these forecasts extend out only four to five years, which traditionally was a short time frame. In the era of Internet Time, however, four to five years seems to be the equivalent of 20-30 years pre-Internet -- another example of the adage that technology's half-life is cut in half by each new innovation, compounded in this case by the new technology's introducing a new medium: e-commerce.
However, the numbers being bandied about for e-commerce (and judging by the headlines, this is the area we hear most about) are almost mind-boggling. Just a year ago, forecasts for e-commerce topped out at $13 billion for 2003. Now we have a report from Cisco Systems, based on a study done by the University of Texas, claiming that e-commerce will exceed $1 trillion by 2003. That is a quantum leap by any standard.
We would not want anyone to think we are implying that because this report emanates from Texas it might be slightly inflated. But with the estimates continuing to range all over the scale, there is a great deal of skepticism about the Internet's real impact -- skepticism compounded by articles about e-commerce cannibalizing existing business divisions.
Where Does the Problem Lie?
When one gets over the shock of estimates that seem to compound geometrically, one realizes that the problem may lie either in what is included in the estimate or in the methodology used to calculate the impact. In the first case, we're referring to what is included in the estimate of "e-commerce."
It was only a year or two ago that, in an attempt to show the growth of retail sales via the Internet, analysts suddenly started including categories such as travel and stock transactions. Never before in the reporting of retail sales had anyone -- including the Commerce Department -- counted travel and stock transactions as retail sales. But they were included in the "Internet retail numbers," and no one challenged this. Without them, the Web's growth at the time would have been dramatically lower than reported. So part of the problem in understanding the impact of Internet sales stems from a redefinition of a standard benchmark -- a redefinition that went unchallenged then and remains unchallenged now.
As an example of faulty methodology in analyzing e-commerce, a new study hits close to home for catalogers. This study, sponsored by Octane Software and conducted by Vanderbilt University's Owen Graduate School of Management, rates 10 catalog sites on their customer service capabilities. Because this article is being written only two days after the release of the study hit the news wires, we cannot judge its impact in the media. The release reported that Lands' End and Victoria's Secret tied for first place. Anyone who reads the full report, however, will see that the methodology employed leaves a lot to be desired. We will not take you through all of our criticisms but will use two to illustrate what we consider faulty research methodology.
First, the researchers used a Likert scale, on which subjective attributes are rated along a range from terrible to great. Typically, to allow for proper weighting of such subjective evaluations, the scale runs from 0 to 7 or higher. In this study, the scale was only 0-3, which frankly is too narrow to allow for meaningful differentiation. Second, while the study measured 20 customer service attributes, there was no weighting among them. All attributes were treated as if their impact on consumer behavior were equal. The attributes included:
* Visible toll-free phone number.
* Organization of links on home page.
* Shopping assistant.
* Returns and cancellations information.
It is obvious to anyone in cataloging or direct marketing that the impact of these attributes on a consumer's buying behavior is not equal.
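To see why the lack of weighting matters, consider a small hypothetical: two invented sites scored on four of the study's attributes using its 0-3 scale. The scores and the weights below are assumptions made purely for illustration, not data from the study.

```python
# Hypothetical scores for two catalog sites on four attributes,
# using the study's 0-3 scale (all numbers invented for illustration).
attributes = ["toll-free number", "link organization",
              "shopping assistant", "returns info"]

site_a = [3, 1, 1, 1]   # excellent phone access, weak elsewhere
site_b = [1, 2, 2, 2]   # merely adequate across the board

# The study's approach: every attribute counts equally.
unweighted_a = sum(site_a)   # 6
unweighted_b = sum(site_b)   # 7 -- site B ranks first

# Suppose, instead, that the phone number and returns information
# drive buying behavior far more than the other two attributes
# (the weights are assumptions, not research findings).
weights = [0.4, 0.1, 0.1, 0.4]
weighted_a = sum(w * s for w, s in zip(weights, site_a))  # 1.8
weighted_b = sum(w * s for w, s in zip(weights, site_b))  # 1.6

print(unweighted_a, unweighted_b)  # 6 7  -> site B "wins"
print(weighted_a, weighted_b)      # 1.8 1.6 -> site A "wins"
```

With equal weighting, site B ranks first; with weights that reflect plausible differences in consumer impact, the ranking flips. A study that ignores weighting can thus crown the wrong winner.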
Problem Gets Compounded
Redefining benchmark standards and using suspect methodology are bad enough. The problem is compounded when the media report the information without any attempt to clarify it. Yet you cannot really fault the media: they see something on the news wire from a respectable organization, and they publish it. Reporters and editors face deadlines and frequently lack deep knowledge of many of the subjects they cover.
So we have a redefinition of commonly understood terms, suspect methodology as the basis of a study, and widespread dissemination of this misinformation by the media. The problem is compounded further when the story runs in one of the major business outlets: The New York Times, The Wall Street Journal, Investor's Business Daily or the San Jose Mercury News.
What is the solution? We aren't sure. Who has the time and resources to police research on the Internet -- and even if someone did, would anyone listen? One thing that can be done is for every company to police all stories that affect it and make sure the record stays accurate. In this case, we feel it would be remiss for any company to use a report such as this to promote itself.
We must conclude with a caveat: many valuable and fine studies are being done on the Internet, such as those conducted by BizRate.com or by Boston Consulting Group for Shop.org. So we aren't condemning all studies on the Internet and e-commerce. Nor are we implying that the Internet will not bring major change to all sales -- only that there is a lot of bad information out there, and it behooves everyone to try to suppress the bad and promote the good. Only that way will we educate the public and investors as to what matters in evaluating a company or an industry such as cataloging.
Bill Dean is president of W.A. Dean & Associates (www.dean-assoc.com), San Francisco, which publishes monthly The Dean Report, cataloging's leading strategic and financial newsletter.