
The ABCs of A/B Split Testing

Imagine this scenario: A “freemium” dating site with millions of members and a mobile app is offering 15% off on an upgrade to a premium membership. The company identifies two segments of customers it believes are likely to upgrade—very frequent site users and those who report a high income or list a high-paying job on their profile.

The site then deploys two variants of the same marketing push notification to some members of these two segments in order to see which version results in more upgrades:

A. “Big Special! 15% off on Ultra Membership!”

Message A had a 40% open rate and a 20% conversion rate: for every 100,000 messages sent, 40,000 were opened and 8,000 of those opens resulted in an upgrade.

B. “Ultra Membership 15% off! Hurry, Deal Ends Soon!”

Message B had a 30% open rate and a 30% conversion rate: for every 100,000 messages sent, 30,000 were opened and 9,000 of those opens resulted in an upgrade.

Though it had a lower open rate, message B yielded better ROI because it had a higher overall coupon redemption, or conversion, rate: 9% of all messages sent versus 8% for message A. The service deploys this message to all members in both of the selected segments, resulting in hundreds of thousands of new Ultra memberships—and a hefty chunk of profit—for the site.
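The arithmetic is worth spelling out. Below is a minimal sketch in Python, using the variant names and rates from the example above; the 100,000-message send volume per variant is an assumption for illustration. It shows why the overall conversion rate, not the open rate, decides the winner:

```python
# Back-of-the-envelope comparison of the two push-notification variants.
# Rates come from the example above; 100,000 sends per variant is assumed.

SENDS = 100_000

variants = {
    "A: Big Special! 15% off on Ultra Membership!":        {"open_rate": 0.40, "conversion_rate": 0.20},
    "B: Ultra Membership 15% off! Hurry, Deal Ends Soon!": {"open_rate": 0.30, "conversion_rate": 0.30},
}

for name, rates in variants.items():
    opens = SENDS * rates["open_rate"]            # messages actually opened
    upgrades = opens * rates["conversion_rate"]   # opens that became upgrades
    overall = upgrades / SENDS                    # upgrades per message sent
    print(f"{name}: {int(opens):,} opens, {int(upgrades):,} upgrades "
          f"({overall:.0%} overall conversion)")
```

Running the sketch reproduces the figures above: 40,000 opens and 8,000 upgrades (8%) for message A against 30,000 opens and 9,000 upgrades (9%) for message B.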

This is A/B split testing, which marketers have been using for decades to measure the effectiveness of their print, television, and desktop-based Web campaigns. That’s because it produces concrete, actionable evidence of what works, so marketers don’t have to rely on educated guesses that may or may not produce desired business results.

This analytics tool is starting to make its mark on mobile—and just in time.

Around the world, there are more than 1.2 billion mobile subscriptions in use, with that number expected to grow to 9.3 billion by 2018. That’s 2.3 billion more than the whole world’s current population. In response to this boom, most marketers are planning to increase their mobile budgets this year. Not only that, but mobile marketing budgets are expected to reach $4.4 billion a year by 2015—and that’s in the U.S. alone.

But if marketers don’t measure the effectiveness of their mobile messaging, that extra spend—and any potential ROI—is likely to evaporate.

For any mobile campaign to be maximally effective, its messages need to be both timely and relevant. Messages that arrive too frequently or don’t add any value simply turn most consumers off, leading them to unsubscribe from marketing updates.

A/B split testing of push notifications, SMS, and mobile email can be one of the most powerful engagement tools in a marketer’s kit, especially for marketers looking beyond standard metrics such as open rates to organic versus prompted opens, time since last open, app session times, redemption rates, and other mobile-specific conversions tied to engagement and ROI.

Marketers who split-test their mobile messages often follow up by using the collected response data to retarget customers who either ignored the message altogether or opened it without taking further action. Retargeting helps marketers get significantly more value from their audiences, boosting customer retention and sales in the process.
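As an illustration only, the segments for such a follow-up might be pulled from a response log along these lines (a hypothetical sketch; the field names and record layout are assumptions, not any particular platform’s API):

```python
# Hypothetical response log: one record per recipient of the test message.
# Field names are illustrative only.
responses = [
    {"user_id": 1, "opened": False, "converted": False},
    {"user_id": 2, "opened": True,  "converted": False},
    {"user_id": 3, "opened": True,  "converted": True},
]

# Users who ignored the message: candidates for a different subject line or channel.
ignored = [r["user_id"] for r in responses if not r["opened"]]

# Users who opened but didn't act: candidates for a reminder or a sweetened offer.
opened_no_action = [r["user_id"] for r in responses
                    if r["opened"] and not r["converted"]]

print("Retarget with new creative:", ignored)
print("Retarget with a follow-up offer:", opened_no_action)
```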

Every time mobile marketers A/B test and retarget, they gather information that allows them to build an increasingly detailed and accurate picture of each user or customer and his or her preferences. The better marketers get to know their individual users, the more relevant and enticing their offers will be.

Even when marketers believe their push, SMS, or mobile email campaigns are doing well—and they may indeed be doing better than well—there is always room for improvement. A/B split testing of mobile messaging campaigns ensures that every message a marketer sends is more powerful, engaging, and personalized—in other words, more effective—than the last.

Brendan O’Kane is CEO of OtherLevels.
