DM News Essential Guide to Lists and Databases: When to Build a Private Prospecting Database

Databases are as old as direct marketing itself. Yet the ability to use technology to build a proprietary prospecting database for a single mailer is still something of a revolutionary concept to those of us outside the big catalog and affinity circles, where this sort of thing has been around for quite a while. And like any invention, the key is not to get caught up in its fad element but to use it when the situation or environment dictates.

It’s critical to understand the basic elements of the private prospecting database as well as the environment that calls for its use. Misunderstanding can bring disastrous results. For starters, these databases are built for large mailers with a fairly homogenous targeting scheme. To understand why, consider the private prospecting database from the list owner standpoint: In the construct of the database, the lists that are invited to participate are typically those that performed well over the years in a traditional merge/purge environment.

When setting up the database, the list broker normally negotiates down the source files' base rates, and possibly select fees, in exchange for reduced data processing on the list manager's side: managers typically send their files to the database on a regular schedule, such as quarterly or biannually, after which usage is reported and checks begin arriving with little work required on their part.

There also is the likelihood of increased usage of the file if the service bureau can provide fractional allocation in the usage reporting. With fractional allocation, if multiple lists contribute the same name into the database, each gets “fractional” credit for that record for usage (e.g., if three list sources provide the same name/record into the database, each is paid one-third of the agreed-upon cost per thousand for that record).
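The fractional-allocation rule described above can be sketched in a few lines of code. This is a minimal illustration, not a real service-bureau system: the function name, the flat per-list CPM, and the idea that each mailed record carries the set of lists that contributed it are all assumptions made for the example.

```python
# Illustrative sketch of fractional allocation: each mailed record's
# cost is split evenly among the list sources that contributed it.
from collections import defaultdict

def allocate_usage(mailed_records, cpm):
    """Credit each list source a fractional share of every record it
    contributed, at the agreed-upon cost per thousand (CPM)."""
    payouts = defaultdict(float)
    for sources in mailed_records:  # sources = list names supplying this record
        share = (cpm / 1000.0) / len(sources)  # per-record cost, split evenly
        for source in sources:
            payouts[source] += share
    return dict(payouts)

# Example from the text: three lists each supply the same records, so each
# is paid one-third. At a hypothetical $100 CPM across 3,000 such records,
# the total of $300 splits into $100 per list.
records = [["List A", "List B", "List C"]] * 3000
print(allocate_usage(records, cpm=100.0))
```

The even split is what makes participation attractive: a list owner gets paid on overlapping names that a traditional merge/purge would have credited to a single source.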

Any good list owner will condition participation on expected usage meeting or exceeding the current volume levels from this mailer.

That leads us to pitfall No. 1: In our rush to keep up with the Joneses, we tend to copy what is deemed successful by the best in breed. In so doing, we often extrapolate that what works for mailer No. 1 is bound to work for mailer No. 2, and so on.

It’s crucial to know that a critical mass of expected mail volume is needed before considering the private prospecting database solution. The numbers vary depending on whom you talk to, but anything short of 4 million to 5 million annual mailed pieces is really not a viable candidate for such a database – the processing charges alone make it a difficult sell when comparing ROI with a traditional merge/purge environment.

Once we clear the volume hurdle, the next challenge is to have a historical understanding of the successes and failures of past targeting. Again, consider this from the list owners’ perspective: A major reason for their participation in the private prospecting database environment is the processing efficiency gained by no longer outputting new files for each mail drop. Instead, it becomes a routine task of updating the files they have cut for inclusion in the database, on the schedule agreed upon at the time of the initial build. If running counts, changing selects, testing segments and so on becomes as heavy a process as it is in the merge/purge environment, providing their file to the database becomes less appealing – they lose the efficiency of updating the same select pulls just a few times a year, per the agreed-upon update schedule.

Here lies pitfall No. 2: If you lack historical knowledge of the successful lists and selects, you risk spinning the list owners’ wheels with update requests to the point where they no longer benefit from participating. This will become clear if usage of the file does not offset the processing costs incurred by constantly changing selects, outputting files and so on. This is not to suggest that the lists cannot be tweaked. However, frequent complete overhauls are bound to produce contentious relations with the list providers. To avoid this, it’s critical to understand the historical performance of the mailer’s source lists so as to minimize the frequency with which the list owner must go back to the well and create brand-new files.

The last area of concern for this introductory analysis is the commitment of the mailer. Staying with the list owner perspective, the last thing they want to do is agree to a discounted pricing schedule, perform their due diligence on the security of the service bureau housing the data, perform countless hours of their own analysis to see whether participation makes business sense and ultimately agree to participate by supplying their file to your private prospecting database, only to have the rug pulled out from under them after one or two mailings.

On the flip side, the mailer cannot be held captive by a vastly inferior database where performance dips far below levels experienced in the traditional merge/purge environment. There must be an agreement, formal or otherwise, that the mailer will give the private prospecting database a chance to perform and not make any knee-jerk decisions. The list owner, in turn, must understand that if their file does not perform over time, in fairness, it should be removed from the private prospecting database.

As a broker or mailer, your reputation hinges on this delicate balance. Jumping ship too soon can doom the possibility of creating a new private prospecting database if you cry wolf the first time. And as a list manager/owner, understanding the economics of the database is really no different from understanding the economics of the merge/purge environment – if the file doesn’t perform, it likely will not make it into subsequent mail plans.

To avoid this file commitment conundrum (pitfall No. 3), it’s critical to have historical knowledge of those files that work well to serve as the backbone of the private prospecting database. As a list broker, I have worked with clients who were nowhere near the sophistication needed to develop a private prospecting database for the reasons cited above, yet demanded that we build one. We refused, effectively resigning their account, which has since caused their new broker headaches.

On the other hand, we work with mailers for whom this solution is a no-brainer, but past failed attempts made before they were ready have left them reluctant to try again. That’s despite a business that screams for the ability to pull data quickly (private prospecting database mail files are typically available within a day, compared with two to three weeks in a traditional merge/purge environment) and to pay discounted CPMs on the net quantity mailed (typically there are no run charges on unused records in the private prospecting database environment). Their business also calls for the ability to analyze their prospect universe as fully as their house file: within the database, they can query the available mail universe, slicing and dicing as needed to unearth hidden pockets of opportunity that coincide with house-file responders outside the previously understood target.

Just because private prospecting databases are gaining prominence outside the tried-and-true catalog and affinity circles doesn’t mean they’re a panacea for every mailer. Understand your limits, yet open your eyes to new ideas that fit within your environment.
