Cameron MacDonnell is maniacal about testing. The marketing director for Just Pets Superstore insists on testing every campaign before it goes out—and retesting when the results aren’t clear. His mantra is to set aside 20% of the audience for testing.
For one recent campaign, MacDonnell and his team decided to take a slightly different approach to an acquisition campaign. Along with its 50 retail stores, Just Pets has a robust website that many of its customers use to reorder favorites like bulk pet food and treats. Targeting prospects who mirror the site search behaviors of those high-value customers, MacDonnell’s team created a programmatic campaign to deliver a promotion—$10 off your first order of $50 or more—to convert those prospects.
Normally, such promotions would have a 60-day response window. MacDonnell wanted to test a 30-day expiration date for the offer. The goal was to get the same number of prospects as the typical campaign, but get them into the funnel faster. The expectation was that 25% of those prospects would become short-term repeat purchasers who would evolve into loyal customers.
As usual, MacDonnell’s team conducted a test first. Response rates were the same, and the promotion met the objective of getting the same number of new customers into the funnel sooner. But after tracking their behaviors for three months, MacDonnell’s team found that those customers weren’t loyal. During that time, about 90% of the new customers made their initial promotional purchase and never came back; about 7% bought a second time. Basically, they took the promotion and ran. Only 3% became repeat customers—a significant decline from the usual results.
MacDonnell is unsure whether the test results are an anomaly. So he needs to decide whether to retest the campaign to determine why these high-value lookalike prospects didn’t convert into repeat customers, or to run the program as is and take the short-term revenue.