
How Not To Flunk The A/B Test

It’s a silver bullet! It’s a miracle solution! It will boost conversions by a gazillion percent! 

Sorry, A/B testing is none of those things. It has long been used as a tool to gauge the difference between a hunch and a hack. In other words, you don’t have to guess which approach works. Try both. The right approach will reveal itself in the data. 

But try telling that to an anxious client who needs results…now. 

Conception, Meet Misconception

So the website could use an overhaul to increase your conversion rate, but you have only a hunch about what might be going wrong. Before any test begins, one must start with a hypothesis: an informed assumption that the test can confirm or refute.

“The whole goal of the experiment is to be data-driven in the design process,” said Jonathan Epstein, SVP of strategy and alliances at Sentient Technologies. “You’d be surprised how many people skip this step….Without it, you’re just trying stuff.” People can be funny when it comes to data. “We see people run A/B tests and disregard the results,” Epstein continued, even when those results come with 95 percent confidence.
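For readers who want to see what that 95 percent confidence bar actually involves, here is a minimal Python sketch of a two-proportion z-test, the standard check behind many A/B tools. The visitor and conversion counts are hypothetical, purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for variant B vs. A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided test
    return z, p_value

# Hypothetical counts: 10,000 visitors per variant.
z, p = ab_significance(conv_a=500, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # clears 95% confidence only if p < 0.05
```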

Or they stop the test too soon. “If sales drop one percent at the beginning of the test, you really have to commit to see the test through,” Epstein added. Then there is peeking. “Variance is a sine wave of differing magnitudes,” he said. “When variance is up, then some people…will call a winner too early.” Then they end the test.
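Epstein’s warning about peeking can be made concrete with a toy simulation: give both variants the same true rate (an A/A test), but check for significance after every batch of traffic. A “winner” appears far more often than the nominal five percent error rate would suggest. All numbers below are assumptions for illustration.

```python
import random
from math import sqrt
from statistics import NormalDist

Z_95 = NormalDist().inv_cdf(0.975)   # ~1.96, the 95%-confidence threshold

def peeked_aa_test(rate=0.05, batch=500, batches=40):
    """Run an A/A test (no real difference), peeking after every batch.
    Returns True if any peek would have declared a 'significant winner'."""
    conv_a = conv_b = n = 0
    for _ in range(batches):
        conv_a += sum(random.random() < rate for _ in range(batch))
        conv_b += sum(random.random() < rate for _ in range(batch))
        n += batch
        p_pool = (conv_a + conv_b) / (2 * n)
        se = sqrt(2 * p_pool * (1 - p_pool) / n)
        if se > 0 and abs(conv_b - conv_a) / n / se > Z_95:
            return True              # an impatient peeker stops here
    return False

random.seed(1)
trials = 200
false_wins = sum(peeked_aa_test() for _ in range(trials))
print(f"{false_wins / trials:.0%} of identical A/A tests produced a 'winner'")
```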

Anyone can do it 

New tool sets are putting the power of A/B testing into the hands of non-experts, who can easily use the tools to test everything they do. “[The] widespread adoption of new digital technologies now requires ‘at-scale’ deployment of winning, personalized digital experiences,” noted Carl Tsukahara, CMO at Optimizely. “Companies of all sizes are experimenting on everything from images, language and layouts, to search results, pricing, and sorting algorithms,” Tsukahara continued. “All of this is designed to win the hearts and minds of the customer at each digital moment in order to maximize revenue, loyalty, share of wallet and engagement.”

Fundamentals matter 

“Go back to the goal,” said Drew Burns, Group Product Marketing Manager at Adobe Target. “It’s nothing short of predicting the future.” Clients have to stick to the math behind the testing product, Burns said. You don’t want to read too much into the results.

Traffic can be an issue here: low-traffic websites take longer to generate data than high-volume sites. In either case, enough data must pass through the test to draw a statistically reliable conclusion, or you can trust the algorithm to pick a winner, Burns explained. Insufficient data can yield false positives.
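As a rough illustration of why insufficient data misleads, here is a back-of-the-envelope sample-size estimate using the standard two-proportion power formula. The baseline rate and hoped-for lift are assumptions, not figures from Adobe.

```python
from statistics import NormalDist

def visitors_per_variant(base_rate, rel_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a relative lift."""
    p1, p2 = base_rate, base_rate * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1

# Assumed: a 3% baseline conversion rate and a hoped-for 10% relative lift.
print(visitors_per_variant(base_rate=0.03, rel_lift=0.10))  # ~53,000 per variant
```

At that scale, a site seeing a few thousand visitors a day needs weeks of traffic per test, which is one reason low-traffic sites take longer to reach a reliable verdict.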

“There are no shortcuts around the math,” Burns said. “Math is math.” 

Also, take care with the metric you choose. Clients who want fast results will focus on click-throughs, but they may be ignoring conversions or bookings, Burns noted.

Patience is golden 

“How soon can we see the results?” That is a question clients frequently ask, noted Maria Caciur, head of managed services at Omniconvert.com. The client often has no idea of the methodology being used to get those results.

Omniconvert’s service requires the client to be patient. It takes four to six weeks just to research and analyze the client’s online business, followed by two to four weeks to run tests. On average, test results come in at roughly three months, with clients breaking even at around six months, Caciur explained. “After one year, we have definite positive results.”

While the tests run, one must also manage customer expectations. “Always have something rolling,” Caciur said: run four tests at the same time, and one of them is likely to turn out well. “Keep the positive vibe going,” she said. Even a test that suffers negative ROI can still show a positive path forward.

To A/B or not to A/B? 

You can’t really blame the clients. Their specialty is online retail, not A/B testing. New toolsets are empowering professionals to do their own testing and analysis, but just because they have the right tools does not mean they know how to use them. Imparting that next bit of wisdom can take many forms.

Analytics tools are becoming easier to use, but ease of use is not the same as understanding. “I hammer on this,” Burns said. “Always draw a distinction between simple and basic.”

Adobe has its “Leader and Learner” program, events that feature clients who have learned how to use Adobe Target to best effect, passing on their experience and insights to the “learners” who are just getting started, Burns explained.

“It’s getting better,” Burns said. “The tools are becoming easier.” Now a client can run an A/B test on their own, relying on a visually guided, three-step testing process.

So what does it take to run multiple tests? Teams, noted Tsukahara, provided they use the right tools the right way. “But to scale the program, you need to team the right product with the proper strategy, team and resources. While teams can run tests without developers or experts, the success of their program will always be amplified with the right objectives, strategy, resources and support,” he said.

One can also skip A/B testing in favor of an automated, multi-factor approach. Sentient offers this with its Ascend product, which can test 10 to 40 changes at once, relying on an underlying algorithmic model to pick the winners, Epstein explained. 
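A quick illustration of why a multi-factor approach needs an algorithm rather than brute force: the number of full-factorial page combinations explodes with each added change. The page elements below are hypothetical.

```python
from itertools import product

# Hypothetical page elements and their variants.
changes = {
    "headline":   ["A", "B", "C"],
    "cta_color":  ["green", "orange"],
    "hero_image": ["photo", "illustration"],
    "layout":     ["one_col", "two_col"],
}
combos = list(product(*changes.values()))
print(len(combos))  # 3 * 2 * 2 * 2 = 24 full-factorial pages

# With 40 independent two-way changes, the space is 2**40 combinations
# (roughly a trillion), far too many to test exhaustively, which is why
# systems like Ascend search the space with an algorithmic model instead.
print(2 ** 40)      # 1,099,511,627,776
```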

“A/B takes a while. You are testing one idea at a time. Only one test out of seven generates a winning result,” Epstein said. Alternatively, the best approach may simply be to press the client, politely, to be patient. Omniconvert will not take on any client who cannot commit at least six months to its service, Caciur noted.

So if you want results, you will have to wait.
