It’s almost time to resolve to do what we didn’t get around to doing this year. And though many of our newest resolutions also may go unfulfilled, here’s one I challenge our industry to keep: clean up our metrics.
The lack of standardized e-mail metrics has been that dirty little not-so-secret secret that we’ve ignored for too many years. No more. The long-awaited shift of media dollars to online has begun. If e-mail is to get its share, we must clean up the way we measure things, and we’ve got to do it soon.
As e-mail marketers, we probably didn’t need the Direct Marketing Association to tell us that the e-mail channel is undervalued and underfunded. The DMA’s recent Power of Direct Marketing economic impact study found that in 2005 e-mail received a pittance compared with the funding bestowed on other online channels ($300 million versus $12 billion) despite an ROI that’s more than double ($57.25 versus $22.52). Nonetheless, the report was a good wakeup call and should cause us all to question what’s wrong with this picture.
My explanation is straightforward: E-mail has an image problem, and it’s one of our own making. We talk incessantly about what’s wrong with e-mail: the flood of spam, phishing attacks, crisis in deliverability, authentication challenges, new Federal Trade Commission regulations, inscrutable Internet service providers, changing protocols, irrational blacklists and filters, conflicting bounce codes and so on.
I’m not suggesting that these issues aren’t real, but we’ve got to stop making e-mail marketing sound like the proverbial Gordian knot that only a so-called expert could unravel. It’s untrue, and it’s producing the perception that e-mail is so trouble-ridden and complex that it may not be worth the effort. At least, that’s what the DMA study suggests about where companies are investing their online dollars.
So why don’t we talk about what’s right with e-mail, why it’s still the killer app with potential for direct marketing that no other online or offline channel can match?
We don’t because we can’t.
We can’t confidently compare and contrast results on e-mail marketing campaigns because no consensus exists on what our core metrics – delivery, open, click, conversion – mean or how they should be calculated and reported. And without hard, reliable stats on their e-mail results, what do companies rely on when deciding their multichannel budget allocations? They default to all the anecdotal evidence about what’s wrong with e-mail.
Here’s a quick look at the sorry state of e-mail metrics:
Delivery rate. Given the importance e-mail marketers attach to deliverability, you’d think that we’d have gotten this metric right by now. You’d also think that it would be a straightforward calculation given what’s being measured: if e-mails don’t reach recipients, they’d be counted as undelivered (failed), and all forms of failures – ISP blocks, hard bounces, soft bounces, failures for other reasons – would be deducted from the delivery rate.
But you’d be wrong on both counts. Not only is failure data often not fully captured or correctly interpreted, but there’s also no agreement on what the failure types mean or how they should be treated in the calculation. Too often, certain types of failures, like soft bounces, technical failures or ISP blocks, are excluded from the calculation or even deducted from the base as if they were never mailed at all.
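To make the arithmetic concrete, here is a minimal sketch of the conservative calculation described above, in which every failure type counts against delivery and nothing is quietly dropped from the mailed base. The function and field names are hypothetical, chosen for illustration; no industry standard defines them.

```python
def delivery_rate(mailed, hard_bounces, soft_bounces, isp_blocks, other_failures):
    """Delivered as a share of everything mailed, with all failure types deducted.

    Note: mailed is the full send count; excluding failures from the base
    (rather than the numerator) is the 'metric manipulation' to avoid.
    """
    failed = hard_bounces + soft_bounces + isp_blocks + other_failures
    return (mailed - failed) / mailed

# Hypothetical campaign: 100,000 mailed; 2,000 hard bounces, 1,500 soft
# bounces, 1,000 ISP blocks, 500 other failures.
rate = delivery_rate(100_000, 2_000, 1_500, 1_000, 500)
print(f"{rate:.1%}")  # 95.0%
```

Deducting a failure type from the base instead (say, treating soft bounces as never mailed) would inflate the reported rate, which is exactly the inconsistency the column describes.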
More than two years ago, JupiterResearch termed this “metric manipulation” and called for standardization across the industry. Sadly, little has changed since. So it’s no surprise that in a recent survey by the Email Experience Council, nearly two-thirds of e-mail marketers weren’t sure, didn’t know or just guessed about how their e-mail delivery rate was calculated.
Open, click and conversion rates. Our metrics mess extends to these measures, too. Do we count unique or total opens and clicks? Are opens a percentage of what’s mailed or delivered? Are clicks a percentage of what’s mailed, delivered or opened? Are conversions a percentage of what’s mailed, delivered, opened or clicked? You’ll find all possible permutations in play.
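The permutations above can be pinned down only by choosing one convention. Purely for illustration (the point of the column is that no such standard exists), here is one internally consistent set of definitions: unique opens and unique clicks as a share of delivered mail, and conversions as a share of unique clicks. All names are hypothetical.

```python
def open_rate(unique_opens, delivered):
    # Unique opens as a percentage of delivered (not mailed) messages.
    return unique_opens / delivered

def click_rate(unique_clicks, delivered):
    # Unique clicks as a percentage of delivered (not mailed or opened).
    return unique_clicks / delivered

def conversion_rate(conversions, unique_clicks):
    # Conversions as a percentage of unique clicks.
    return conversions / unique_clicks

# Hypothetical campaign results under this convention.
delivered, opens, clicks, conversions = 95_000, 19_000, 3_800, 380
print(f"open: {open_rate(opens, delivered):.1%}, "
      f"click: {click_rate(clicks, delivered):.1%}, "
      f"conversion: {conversion_rate(conversions, clicks):.1%}")
# open: 20.0%, click: 4.0%, conversion: 10.0%
```

Swap the denominators (mailed for delivered, or opened for clicked) and the same raw numbers yield very different headline rates, which is why cross-campaign comparisons are unreliable without an agreed convention.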
Other e-mail metrics deserve scrutiny, but I’d be content just to get standards in place on the basic ones. E-mail marketing as a channel requires standardized metrics to claim its rightful place at the table with other direct marketing disciplines. After all, direct marketing is built on measurement and cumulative learning.
E-mail marketing practitioners deserve such metrics to reliably measure the results of their efforts, make valid comparisons across deployment solutions and providers, and demonstrate the bottom-line value of e-mail relative to other channels.
It doesn’t matter how we got into this mess. What matters is getting ourselves out, and recognizing that we’ve got collective interest in the outcome. I applaud the EEC for taking up the challenge and encourage all who care about e-mail’s future to join in the effort and resolve to fix e-mail metrics in 2007.