
Little Things Make a Big Difference

It's no secret that e-mail offers many benefits for marketers.

Among them are all of its direct response aspects, along with dynamic messaging, segmentation, and tracking and analysis tools that make what can be a cumbersome process in the offline world downright easy. What's not to like about splitting a campaign six (or even 60) ways to learn how to market most effectively to your customers and prospects through this channel?

Because of the possibilities this medium gives us, I've been a big proponent of aggressive, regular e-mail campaign testing for some time. Since test results are typically final three to five days after deployment, and since so many testing tools are available, we can learn more about our audience and its behavior in less time than we can offline. Because of this, the mandate has been: test something with every outbound effort. Just be sure to test.

The challenge is that though the medium by its nature makes an A-Z split test, for instance, easier to implement than an offline campaign with the same criteria, heartier e-mail campaigns still require people, with their brains and their time, to adequately plan, manage, track and apply them.

If you roll out a new e-mail message, newsletter or promotion every week, you may have the resources to figure out how your last-3-month responders fared against your last-6-month and last-12-month responders. Maybe you have the time to drill down further and create unique tracking codes based on your audience's industry, job title, geographical area and more, so you can determine the best selects for enhancing future campaigns. Those are all fine things to test. But no matter how sophisticated a platform you may have, either internal or outsourced, a need remains for internal manpower to manage it all.

In this age of increasingly limited resources due to downsizing and the economy, it's not always feasible to create multiple-celled test campaigns every week. So how do you continue to test and learn without taking up too much time and without taxing your internal resources?

It boils down to the little things.

You can build increased response over time by testing and applying just one small component with every outbound effort.

Neither the test nor its components need to be fancy. Test a minor change in your format or layout. Or test a new price point (if there is a paid call to action) within your message. Or perhaps it's a different appeal or pain point, or even a slight variation of your offer.

Of course, the same rules apply: you still need to be able to read the results. Make sure your test quantities are statistically valid, though the definition of valid can vary dramatically depending on your business. For some marketers, a single test cell must project a certain number of responses to be valid; many use 50 to 100 net responses as the bar. For others, it's a percentage of the universe, meaning they want each test cell to be no less than a certain predetermined percentage of their total house list.

The good news is that when you're testing a single small component, test cell size and validity requirements typically don't apply. However, to be safe, if you have a niche list, aim for each test cell to yield at least 100 responses — that is, 100 completions of each unique call to action within your message. That could mean you are estimating that 100 people will fill out your survey, download a trial of your software's latest version or sign up for three free issues of one of your other subscription offers. Whatever completion means to your business, you can project the number of responses based on previous efforts.
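As a rough illustration, here is a minimal Python sketch of that projection. The 2 percent response rate and the 100-response target are assumed figures for the example, not numbers from any particular campaign.

```python
import math

def required_cell_size(target_responses: int, historical_rate: float) -> int:
    """Minimum sends per test cell to project `target_responses`
    completions, given a response rate observed on past efforts."""
    if not 0 < historical_rate < 1:
        raise ValueError("historical_rate must be between 0 and 1")
    return math.ceil(target_responses / historical_rate)

# Assumed numbers: past issues converted at 2%, and we want each
# cell to project at least 100 completions.
print(required_cell_size(100, 0.02))  # -> 5000 sends per cell
```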

If you send a weekly, templated newsletter wherein you normally get 200 responses per issue, you should be able to split-test between a control (the current winning version) and your test, which will include just one change. If you test more than one component in your test message and that test message beats the control, you will be hard-pressed to determine the winning factor.
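To read the results of such a split with some rigor, a standard two-proportion z-test is enough. Here is a minimal sketch; the counts are hypothetical and the function is illustrative, not part of any tool mentioned above.

```python
import math

def z_score(control_resp, control_sent, test_resp, test_sent):
    """Two-proportion z-test: how confidently did the test beat the control?"""
    p1 = control_resp / control_sent
    p2 = test_resp / test_sent
    pooled = (control_resp + test_resp) / (control_sent + test_sent)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_sent + 1 / test_sent))
    return (p2 - p1) / se

# Hypothetical counts: 200 of 10,000 respond to the control,
# 235 of 10,000 respond to the one-change test.
z = z_score(200, 10_000, 235, 10_000)
print(f"z = {z:.2f}")  # about 1.70; a one-sided 95% threshold is roughly 1.65
```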

Another bonus to simple, regular tests is that even if you were to improve response by the smallest fraction of a percentage point per issue, you would still be in constant fine-tuning mode and would see a noticeable overall bump in response within a few mailings.
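The arithmetic behind that claim is simple compounding. A minimal sketch, assuming a 2 percent starting response rate and a 0.5 percent relative gain from each winning test; both figures are invented for the example:

```python
base_rate = 0.020   # assumed starting response rate of 2%
lift = 0.005        # assumed 0.5% relative gain per winning test

rate = base_rate
for issue in range(12):  # twelve weekly mailings, about one quarter
    rate *= 1 + lift

print(f"After 12 issues: {rate:.4%}")                        # ~2.1234%
print(f"Overall relative bump: {rate / base_rate - 1:.1%}")  # ~6.2%
```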

I have seen, and worked with, companies that applied this testing approach regularly, particularly in high-season periods when they needed to allocate their resources differently. One nonprofit, for example, regularly broadcast a free e-letter that provided mainly content. Each issue also contained a subtly positioned donation request.

After implementing the scaled-down testing strategy, completions of the donation call to action increased by almost 30 percent over six weeks. The first week's test moved the ask to a more prominent location. The second week, they kept that location but tested a different embedded graphic. Another week, they tested a higher price point, knowing that the number of donors would decrease but betting that the larger minimum donations would more than make up for it. And so on. The marketers only had to produce two variations of the same letter each week, and the value of the tests more than made up for that little bit of extra time. The 30 percent lift brought in additional revenue.

Testing is a good thing and can yield powerful lessons. When testing becomes cumbersome, however, the results often are never applied to subsequent campaigns or are simply not worth the time. When resources are strapped, focus on conserving that energy. Test small things that you can apply easily and now. The results may not be stellar, but you will have taken the small steps that pave the way for a modicum of improvement, with a minimum of effort.
