How seriously do American organizations take data quality?
According to a June 2005 report from Dynamic Markets, managers at most companies know that customer data are their lifeblood, but they’re still unable to use customer data to their best advantage. And 77 percent of U.S. companies admit to losing revenue because of poor-quality customer data. It’s a persistent challenge despite the effective, easy-to-use data cleansing solutions on the market.
Why is customer data quality important? We all know about the more abstract concerns, the customer profiling and customer service issues. Most of us would agree that people are somewhat less likely to buy from companies that repeatedly misspell customers’ names. Matching product offers to their most likely audience is, of course, more effective when it is based upon accurate data. But there are more immediate, concrete concerns.
Take address data. A package sent with the wrong abbreviation for “Street” may trigger carrier address correction charges of $5. Return on investment from a catalog mailing won’t be as high if 10 percent of the catalogs are undeliverable.
And then there’s the problem of products lost in the mail. It’s actually quite easy to assign a dollar cost to an organization’s data quality problems when it comes to specific areas such as addresses or duplicate records.
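The back-of-the-envelope math is straightforward. A minimal sketch in Python, using the $5 correction charge and 10 percent undeliverable rate mentioned above; the shipment volume, mailing size and per-catalog cost are hypothetical placeholders, not figures from any report:

```python
# Rough cost model for address-quality problems.
# The $5 correction fee and 10% undeliverable rate come from the
# examples above; the volumes and per-catalog cost are hypothetical.

CORRECTION_FEE = 5.00                 # carrier charge per bad-address package

shipments_with_bad_address = 2_000    # hypothetical monthly count
correction_cost = shipments_with_bad_address * CORRECTION_FEE

catalogs_mailed = 100_000             # hypothetical mailing size
undeliverable_rate = 0.10             # 10 percent undeliverable
cost_per_catalog = 1.50               # hypothetical print + postage cost

wasted_catalog_cost = catalogs_mailed * undeliverable_rate * cost_per_catalog

print(f"Address-correction surcharges: ${correction_cost:,.2f}")
print(f"Undeliverable catalog spend:   ${wasted_catalog_cost:,.2f}")
```

Even with placeholder volumes, a model like this turns “bad addresses” from an abstract worry into a line item you can weigh against the price of a cleansing tool.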
So why haven’t companies tackled data quality issues more effectively? Part of the problem lies in distributed responsibility. According to Dynamic Markets, only 30 percent of U.S. companies have an organization-wide strategy, with central ownership, for maintaining data quality. Yet with data coming from multiple channels, and manipulated by numerous cross-functional teams, a central data quality authority is becoming critical.
Another issue is simply the range of data cleansing products and services. There are batch cleansing tools, which validate existing data; front-end tools, which validate data entering the database; and thin-client applications, which validate data coming from e-commerce sites, point-of-sale solutions and call center applications. So many potential solutions can make a simple problem seem more complex than it is.
I suggest building some flow charts. Simply analyze where your data come from, where the data go and how you use them. Then dig into the costs associated with different types of data problems. Someone in your organization has, for example, a record of delivery service surcharges for inaccurate addresses. The surcharges may be hidden under invoice categories like “other” or “miscellaneous,” but they’re worth uncovering.
Once you learn where inaccuracies are coming from, and which ones are hurting you most, you’ll know where to start. If duplicate records coming in over your e-commerce site are your big problem, you’ll need a Web-based data cleansing tool. Merging two large databases, both rife with error? You’ll want to run a batch process against the whole lot. If your call center representatives make keying errors that affect data quality, you should start researching call center applications.
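To make the duplicate-record problem concrete, here is a minimal detection sketch: normalize a few key fields, then group records that collapse to the same key. The field names and sample records are hypothetical, and real cleansing tools use fuzzier matching (phonetic codes, edit distance), but the underlying idea is the same:

```python
# Minimal duplicate-detection sketch: normalize key fields and group
# records that collapse to the same match key. Field names and sample
# records are hypothetical.
from collections import defaultdict

def normalize(record):
    """Build a match key from lowercased, whitespace-collapsed fields."""
    name = " ".join(record["name"].lower().split())
    email = record["email"].strip().lower()
    return (name, email)

def find_duplicates(records):
    """Return groups of records that share the same normalized key."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalize(rec)].append(rec)
    return [dups for dups in groups.values() if len(dups) > 1]

customers = [
    {"name": "Ann  Smith", "email": "ann@example.com"},
    {"name": "ann smith",  "email": "ANN@example.com "},
    {"name": "Bob Jones",  "email": "bob@example.com"},
]

for group in find_duplicates(customers):
    print("Possible duplicates:", group)
```

Running even a crude pass like this over a sample of your database will tell you quickly whether duplicates are a marginal annoyance or a problem worth a dedicated tool.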
Bad data eat into profits at most corporations. The good news is that inaccurate customer data are a solvable problem. Even better news is that many of the solutions that validate customer data work at a relatively superficial level, requiring few changes to existing systems and processes. The solutions are easy. What’s tough is determining exactly what data quality problems you have.