Marketers are so awash in data today that they may feel as if they’re drowning in it. Data management platforms ingest CRM, site analytics, and third-party data; APIs provide instant access to every campaign decision and outcome; social media platforms are fountains of uncontrolled consumer sentiment; and real-time bidding exchanges offer billions of advertising opportunities a la carte every day.
Yet even with these technological marvels of seemingly infinite data, marketers are bereft of insights into what really makes advertising campaigns on TV and online successful. The metrics that are widely used today are insufficient, can in many cases mislead marketers, and can even erode their credibility in the C-suite. While advertisers have become incredibly data-savvy, the most difficult challenge remains determining when and how advertising causes consumers to change their behavior.
Marketers can receive a nearly endless array of metrics for every campaign. If it can be measured, it is being measured—increasingly with Big Data, manipulated in real time, optimized by artificial intelligence, and presented as an interactive infographic. But most metrics used today can be classified only as either necessary for campaign success or correlated with it.
Necessary metrics, like viewability, are prerequisites—an impression that was not viewable cannot have a positive impact. Correlated metrics, like clicks and attributed conversions, are supposed to move up and down as a campaign becomes more or less successful.
But all necessary and correlated metrics have critical failures:
- An impression can be 100% viewable yet still have no impact on a consumer.
- Accidental clicks can inflate click-through rates.
- Attributed conversions can be due to a campaign repeatedly targeting audiences that were going to convert anyway.
In many cases, the metric that matters most is “causality”—definitively measuring whether the advertising campaign directly caused more people to engage or take action than they would have otherwise. The only way to measure causality is by conducting rigorous experiments:
1. Identify an event that defines a “success” and can be measured at an individual audience level—this can be a brand survey response, movement down the purchase funnel, or actual purchases.
2. Split audiences randomly into test and control groups, ensuring the control group is never exposed to advertising.
The causal impact of the advertising can then be determined by comparing success rates across the randomly defined test and control groups. If there is a statistically significant difference between the two groups, the advertising caused the difference.
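The comparison described above can be sketched as a standard two-proportion z-test. This is an illustrative implementation, not the author's; the function name, inputs, and the 0.05 significance threshold are assumptions:

```python
from math import sqrt
from statistics import NormalDist

def causal_lift(test_successes, test_n, control_successes, control_n, alpha=0.05):
    """Compare success rates between randomly assigned test and control groups.

    Returns the lift (difference in success rates), a two-sided p-value,
    and whether the difference is statistically significant at level alpha.
    """
    p_test = test_successes / test_n
    p_control = control_successes / control_n
    # Pooled success rate under the null hypothesis that the ads had no effect
    p_pool = (test_successes + control_successes) / (test_n + control_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / control_n))
    z = (p_test - p_control) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_test - p_control, p_value, p_value < alpha

# Hypothetical campaign: 1,200 of 100,000 exposed users converted,
# versus 1,000 of 100,000 in the randomly held-out control group.
lift, p_value, significant = causal_lift(1200, 100_000, 1000, 100_000)
```

With these illustrative numbers, the 0.2-percentage-point lift is statistically significant, so the advertising caused the difference; with a smaller audience, the same lift might not be distinguishable from noise.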
By combining these results with carefully measured advertising spend and customer value data, marketers can calculate the true causal ROI for an advertising campaign. A positive ROI indicates it was a wise investment. A negative ROI means it was a waste.
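One simple way to turn an experimentally measured lift into a causal ROI figure is to value only the incremental successes the campaign caused. This is a sketch under assumed inputs (the variable names and the flat value-per-success model are illustrative, not the author's formula):

```python
def causal_roi(test_rate, control_rate, audience_size, value_per_success, spend):
    """ROI credited only with successes the campaign caused.

    Incremental successes = (test rate - control rate) * exposed audience size.
    """
    incremental_successes = (test_rate - control_rate) * audience_size
    incremental_value = incremental_successes * value_per_success
    return (incremental_value - spend) / spend

# Hypothetical campaign: 1.2% vs. 1.0% success rates, 1M exposed users,
# $50 of customer value per success, $80,000 of measured ad spend.
roi = causal_roi(0.012, 0.010, 1_000_000, 50.0, 80_000.0)
# A positive result means a wise investment; a negative one means waste.
```

Here the 0.2-point lift yields 2,000 incremental successes worth $100,000 against $80,000 of spend, a causal ROI of 25%.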
Suddenly the marketer knows without a doubt whether a campaign was successful. Dollars can be allocated to channels, campaigns, and strategies that generate high ROIs; those that do not can be remedied or abandoned.
Many savvy marketers today occasionally conduct such experiments, but the only way to take full advantage is to make experimentation pervasive.
Pervasive experimentation means conducting experiments and measuring ROI for every advertising campaign. This enables robust comparisons of value generated across radically diverse channels such as social, digital, and TV. Further, by tracking results over time, marketers can observe the impact of any changes made to an advertising campaign, such as altered creative or optimized buying strategies.
Today, marketers face two roadblocks to implementing pervasive experimentation.
The first is technological. Conducting experiments in digital channels is relatively easy—test and control groups can be created with cookies or mobile device identifiers and value creation can be measured on websites or through mobile applications. What’s missing is the technology to automate the configuration and management of experiments at scale, including the ability to optimize to causal outcomes.
Further, some digital partners do not allow access to this kind of user-based control (for example, Google Search), and so marketers should pressure their partners to support this capability. In other channels, much of the foundational technology is still missing. For example, set-top boxes and smart TVs must be upgraded to enable household-level control of TV advertising.
The second roadblock is cultural. Marketers, agencies, platforms, and publishers all have vested interests in existing flawed measurement approaches. Changing to a new standard for measurement based on experimentation will take time.
While advertisers have become incredibly data-savvy, the most difficult challenge remains causally linking that data to outcomes that really matter. No matter how much we invest in Big Data, massive changes in technologies, processes, and behaviors must occur to make pervasive experimentation commonplace. Only then will we ensure we are using our Big Data assets to create real competitive advantages.
Jeremy Stanley is the chief technology officer at Collective.