Where A/B Testing Falls Short

Why multivariate testing trumps A/B testing.


It may (or may not) have been Mark Twain who said, "Everybody talks about the weather, but nobody does anything about it." Today, in the marketing world, it's testing that everybody talks about. But despite all the admonitions about the necessity of testing, too many companies aren't doing anything about it, and for good reasons.

We've been told why we should test: It's the best way to optimize a website for leads or traffic or engagement and it's the best way to boost email response rates. This compelling rationale crashes headfirst into a universally recommended methodology, A/B testing, and the victim of the collision is the testing program.

A/B testing (or "split-run" or "one-factor-at-a-time" testing) is simple to describe and easy to do. You create two versions of your Web page or email that differ in only one characteristic; for example, the offer price or headline or background color. Some customers (the "target" group) get version A, while other customers (the "control" group) get its counterpart, version B. Then you compare results and use the version that works best.
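The comparison step above is usually a simple two-proportion test. Here's a minimal sketch using only the Python standard library; the conversion counts are hypothetical, invented for illustration:

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does version B convert differently from version A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: version A converts 120 of 2,400 visitors, version B 162 of 2,400
p_a, p_b, z, p = ab_test(120, 2400, 162, 2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

Note the sample sizes: thousands of visitors per cell, just to detect a difference on one factor. That cost is the crux of the argument that follows.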

The underlying assumption is that differences in results are attributable to differences in the one factor you changed. While that may be true, it ignores the possibility that other values for your one factor may be even better, or that other factors may be more impactful. To discover the best value for your one factor, or to measure the effects of other factors, could require many A/B tests. Multiple A/B tests consume time and money to figure out whether it's the size of the offer, the color of the background, the subject line, or even the day of the week your message is received that's depressing response.
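To make that arithmetic concrete, here's a hypothetical count (the factors and levels are invented for illustration) of how quickly the versions multiply, and how many sequential one-factor-at-a-time tests it takes just to compare each level of each factor against a baseline:

```python
from itertools import product

# Hypothetical factors a marketer might want to test, each with its levels
factors = {
    "offer_price": ["$19", "$29", "$39"],
    "headline": ["benefit", "question"],
    "background": ["white", "blue"],
    "send_day": ["Tue", "Thu", "Sat"],
}

# Every distinct combination of levels
combinations = list(product(*factors.values()))
print(len(combinations))  # 3 * 2 * 2 * 3 = 36 distinct versions

# One-factor-at-a-time testing: comparing each non-baseline level of a factor
# against its baseline takes (levels - 1) tests, run back to back
sequential_tests = sum(len(levels) - 1 for levels in factors.values())
print(sequential_tests)  # 6 sequential tests, each waiting on the last
```

Six sequential tests, each needing its own traffic and its own waiting period, and the interactions between factors still go unmeasured.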

If your company is operating a high-volume website with many visitors, and if you're willing to use a good fraction of those visitors for the control group, and if there aren't too many variables you want to test, and if you're not in too big a hurry, then A/B testing could work for you.

However, every day that your website isn't optimized or your email campaign isn't delivering the sales you need is costing you money. You could be facing the same dilemma that John Lawes was facing back in Victorian England.

Sir John and his colleagues were in the midst of developing artificial manure and inventing the chemical fertilizer industry. He needed answers to several questions at once: How much of his new artificial manure should be applied, and to which crops? When in the growing cycle? How much phosphate? How much sulfuric acid? In what proportions?

Remember, this was farming, not email. The test results wouldn't be known until the end of the growing season, not the next day. And the next growing season was a year away. Sir John realized that he needed to test several variables simultaneously, and thus multivariate testing was born.

The formulas to sort out the effects of multiple variables being tested simultaneously are complex, so this methodology languished for decades. As computer use spread and the pace of marketing quickened, more companies turned to multivariate testing to get the answers they needed while the answers were still relevant. Despite the many low-cost or free programs to help companies do A/B testing, the fastest and least expensive way to evaluate your website or email campaign performance is multivariate, not A/B, testing.

Yes, multivariate testing is more complex. You need to know not only which variable makes your site or email most effective, but also the relative weights of the several variables you're testing. Is it the offer? The amount of text? The background color? Is it your face in the upper corner of the blog that's turning off prospects? And if it's bad, how bad is it?

Another benefit of multivariate testing is its capability to measure the interaction between variables. How is the impact of a headline affected by the size of the accompanying photograph? Does a smaller headline work better with a bigger photo? Whether or not to even use a photo could affect the choice of font size for a headline.
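The headline-and-photo question above is a classic 2×2 factorial. A minimal sketch, with hypothetical conversion rates invented for illustration, shows how the interaction falls out of just four cells:

```python
# Hypothetical 2x2 factorial: headline size x photo size.
# Each cell holds the observed conversion rate for that combination.
rates = {
    ("small_headline", "small_photo"): 0.040,
    ("small_headline", "large_photo"): 0.062,
    ("large_headline", "small_photo"): 0.055,
    ("large_headline", "large_photo"): 0.051,
}

# Main effect of each factor: average change when it moves small -> large
headline_effect = (
    (rates[("large_headline", "small_photo")] + rates[("large_headline", "large_photo")]) / 2
    - (rates[("small_headline", "small_photo")] + rates[("small_headline", "large_photo")]) / 2
)
photo_effect = (
    (rates[("small_headline", "large_photo")] + rates[("large_headline", "large_photo")]) / 2
    - (rates[("small_headline", "small_photo")] + rates[("large_headline", "small_photo")]) / 2
)
# Interaction: does enlarging the photo help more under one headline than the other?
interaction = (
    (rates[("large_headline", "large_photo")] - rates[("large_headline", "small_photo")])
    - (rates[("small_headline", "large_photo")] - rates[("small_headline", "small_photo")])
) / 2

print(f"headline {headline_effect:+.3f}, photo {photo_effect:+.3f}, "
      f"interaction {interaction:+.3f}")
```

With these made-up numbers the interaction is negative: the bigger photo lifts response strongly under the small headline but not under the large one. A sequence of one-factor A/B tests could never surface that, because each test holds the other factor fixed.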

Not being a math geek is no longer a good reason to avoid multivariate testing. Many marketing service firms will do the calculations for you; they'll even design the tests and interpret the results. And because multiple factors are tested together, fewer subjects are needed: costs come down and time to market is compressed.

If you're still unconvinced about the value of multivariate over A/B, here's the simple and obvious strategy: test it for yourself.


Mark Klein is founder and CEO of Loyalty Builders Inc.
