At the beginning of the month, a war of words broke out between global food brand Kellogg and the far-right news website Breitbart.com. Kellogg, along with some other brands, announced that it was pulling advertising from the site because of concerns about “hate speech.”
Breitbart, in turn, called on its followers to boycott Kellogg’s products, branding the cereal manufacturer’s decision “un-American.”
With many brands continuing to advertise on Breitbart—including top-line tech and marketing tech brands—and similar concerns being expressed about ads appearing on “fake news” sites, this has become much more than a story about a limited boycott of one publisher.
It goes right to the heart of adtech, and especially the programmatic buying and serving of ads, which was supposed to take ad inventory trading out of human hands in order to reach huge audiences at high speed. Indeed, some brands have blamed the technology for delivering ads to controversial sites. Is an automated adtech environment compatible with brand safety?
For Most Adtech: Business as Usual
Using simple online tools like the browser extension provided by Ghostery, it’s easy to get a sense of which adtech brands are tracking visitors to any given website. Take Breitbart as an example: many of the biggest names in adtech are present on the site. What’s more, tracking my visits to Breitbart has been strikingly effective at customizing my Breitbart experience (limited, I should say, to research for this story).
Each time I visit, I’m shown ads for some of the major tech and marketing tech brands I’ve recently written about. And it’s no coincidence: if I use private browsing to visit Breitbart anonymously, I’m served untargeted ads, for example for NRA merchandise.
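The core idea behind extensions like Ghostery can be sketched in a few lines: collect the hosts that scripts and pixels on a page are loaded from, and flag those that don’t belong to the page’s own domain. (Real tools match against a curated database of known trackers; this toy version, with invented HTML and domains, simply lists every third-party host.)

```python
# Toy sketch of third-party tracker spotting. The HTML and all domain
# names below are invented examples, not real sites or trackers.

from html.parser import HTMLParser
from urllib.parse import urlparse

class ResourceHostParser(HTMLParser):
    """Collect the hosts that scripts, images, and iframes load from."""

    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            for name, value in attrs:
                if name == "src" and value:
                    host = urlparse(value).netloc
                    if host:
                        self.hosts.add(host)

def third_party_hosts(html, page_host):
    """Return resource hosts that are not part of the page's own domain."""
    parser = ResourceHostParser()
    parser.feed(html)
    return {h for h in parser.hosts if not h.endswith(page_host)}

page = """
<html><body>
  <script src="https://adplatform.example/tag.js"></script>
  <img src="https://pixels.tracker.example/p.gif">
  <script src="https://www.news-site.example/app.js"></script>
</body></html>
"""
print(sorted(third_party_hosts(page, "news-site.example")))
# ['adplatform.example', 'pixels.tracker.example']
```

The page’s own script (`www.news-site.example`) is filtered out; the two third-party hosts are the kind of presence a tracker-blocking extension would surface.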
One adtech company you won’t find serving ads to Breitbart is AppNexus. Last month, the New York-based programmatic platform announced that it was barring Breitbart from using its ad-serving tools because the site violated its hate speech guidelines. Said Josh Zeitz, AppNexus VP of corporate communications: “We did a human audit of Breitbart and determined there were enough articles and headlines that cross that line, using either coded or overt language.”
Nothing New Here
I spoke with Zeitz about the AppNexus decision. He stressed that “there’s actually no new news here.” From the company’s inception, he said, AppNexus has barred domains: examples include sites featuring pornography, graphic violence, or piracy, as well as hate speech designed to incite violence against minority groups. He also said AppNexus enforces quality standards.
“We enforce our marketplace standards,” he said. “We don’t presume to tell other companies what to do.” As for barring Breitbart, “We were asked a specific question about a domain in the news,” he said. AppNexus uses human audit and other procedures to screen sites—a process which, he confirmed, never ends.
Zeitz is right, of course, that the dilemma is nothing new. Whether it be porn or piracy, brands and ad platforms have been trying for years to avoid association with illegal or distasteful content.
It Doesn’t Need to Be This Way
In fact, Eric Franchi, co-founder of Undertone (which both creates and delivers digital advertising experiences), was eager to express his incredulity that the current situation even exists. There are two extremes, he said: one in which there’s “massive service of ads to tens of millions of different websites, targeting audiences with no regard for content.” At the other end of the continuum is old-school “hand picking” of the sites and context in which ads will appear. “But you don’t need to have one extreme or the other,” he said. “Simple practices, tactics and technologies are available” to give brands the scale they need while maintaining quality and supporting brand standards.
Franchi laughed heartily when I suggested that brands had already confronted this problem in the context of adult or PG-13 content. To him, it was obviously true. “We started this business in 2001. I go back to those days of blogs, user-generated content, when individuals were starting to post online. The same basic tactics we recommended a decade ago are still sound.”
The two obvious ways for brands to get a handle on all this are blacklists, which exclude specified sites (Kellogg, of course, has just blacklisted Breitbart), and whitelists, which define a finite selection of sites with which a brand is willing to be associated. The whitelist idea, Franchi said, “is fundamentally correct.” Blacklisting, he suggested, is purely reactive, whereas a whitelist can be compiled that ranks sites and apps for quality.
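The difference between the two approaches comes down to a default: block by default (whitelist) or allow by default (blacklist). A minimal sketch, with invented domain names and no resemblance to any vendor’s actual implementation:

```python
# Minimal sketch of blacklist vs. whitelist placement filtering.
# All domain names here are invented examples, not real sites.

def allow_placement(domain, blacklist, whitelist=None):
    """Return True if an ad may be served on `domain`.

    Blacklisting is reactive: only sites already identified are blocked.
    Whitelisting is proactive: only pre-vetted sites are eligible at all.
    """
    if domain in blacklist:
        return False
    if whitelist is not None:        # whitelist mode: allow only vetted sites
        return domain in whitelist
    return True                      # blacklist-only mode: allow by default

# Reactive, blacklist-only policy: an unknown new site slips through.
blacklist = {"blocked-site.example"}
print(allow_placement("unknown-new-site.example", blacklist))             # True

# Proactive whitelist policy: the same site is excluded until vetted.
whitelist = {"vetted-premium-site.example"}
print(allow_placement("unknown-new-site.example", blacklist, whitelist))  # False
```

The sketch makes Franchi’s point concrete: under a blacklist, anything not yet flagged gets through, while a whitelist admits only what has been vetted, which is why a hit new publisher has to earn its way onto one.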
“For brands which care about quality,” he said, “there’s probably a pretty short tail of vetted, premium sites.” I asked whether the whitelist strategy depresses prospects for online publishing start-ups. “Not necessarily,” he said. “If [the site is] a real hit, has a lot of users, is well-known, it will naturally make its way onto whitelists.” Publishers should “evangelize quality and their brand-safe nature.”
It’s Advertisers’ Choice
Alice Lincoln, VP of data policy and governance at MediaMath, emphasized that it’s the advertiser’s choice. MediaMath, also New York-based, created the first demand-side platform (DSP) and today supplies a range of digital advertising technologies and services. “There are three ways that advertisers can protect themselves from content they don’t want to be associated with.”
- Automatic exclusion of “beyond the pale” websites engaged in fraudulent or illegal activity. (Everyone I spoke to for this article actively screens out sites of this kind.)
- Complete control based on client-created whitelists or blacklists.
- Leverage of contextual taxonomies to avoid serving ads alongside specified content (some brands, for example, don’t want to run ads alongside news about natural disasters). MediaMath offers this option in partnership with third-party vendors like DoubleVerify that specialize in authenticating digital media quality.
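The three controls above can be read as a layered decision, sketched below. The category names, domains, and structure are invented for illustration; this is not MediaMath’s implementation.

```python
# Layered brand-safety check, roughly mirroring the three controls above.
# All domain names and content categories here are invented examples.

PLATFORM_FRAUD_SITES = {"known-fraud-site.example"}   # excluded for every client

def may_serve(domain, page_categories, client_blacklist, blocked_categories):
    """Return True if an ad may run on this page for this advertiser."""
    # 1. Automatic exclusion of fraudulent/illegal sites, regardless of client.
    if domain in PLATFORM_FRAUD_SITES:
        return False
    # 2. Client-controlled blacklist (a whitelist check could slot in here too).
    if domain in client_blacklist:
        return False
    # 3. Contextual taxonomy: skip pages whose content matches blocked categories.
    if page_categories & blocked_categories:
        return False
    return True

# A brand that avoids natural-disaster news but accepts politics coverage:
blocked = {"natural-disaster"}
print(may_serve("news-site.example", {"politics"}, set(), blocked))           # True
print(may_serve("news-site.example", {"natural-disaster"}, set(), blocked))   # False
```

The ordering matters in practice: the platform-wide fraud screen applies to everyone, while the second and third layers are where, as Lincoln says, the level of control is determined by the client.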
The level of control is determined by clients. “What is the technology there to do?” asked Jesse Comart, MediaMath VP and head of global communications. “It’s there to do what brands want—within reason, and within regulatory and legal limits—to achieve their [desired] outcome.” Not all platforms, he said, offer the same potential degree of control as MediaMath.
In fact, I heard from several sources that there’s plenty of negligence in the adtech space when it comes to brand reputation, in relation to both direct response advertisers (“They don’t care,” said one source); and platforms allegedly engaged in click fraud.
On the Other Hand
In a number of respects, however, the adtech industry doesn’t speak with one voice on this issue. I encountered strong dissent on the whitelist approach from Patrick Hopf, founder and president of Montreal video advertising programmatic platform SourceKnowledge.
“It’s completely impractical,” he said. He also agreed it’s an old problem. There was a backlash years ago from advertisers against P2P networks engaged in piracy. “It was faced at the time,” he said, “and addressed, but it’s an inexact science. You try to control it as best as possible.” As for adult sites, “you’d be surprised.” Mainstream ads still appear in association with R-rated content, if perhaps less often than they used to.
As for concerns about political content: “Who is the policeman of the internet? Who is making the decisions? Brands aren’t going to do the work.” Hopf was concerned about self-appointed gatekeepers, especially given recent moves by ad blockers to blacklist publishers that don’t comply with their rules. SourceKnowledge, of course, will blacklist sites on request, and, like other platforms, will screen out fraudulent sites. “We’ll do the whitelist,” he conceded, “when agencies want to.”
Nevertheless, he said, “it’s a slippery slope.” Refusing to track certain sites would mean SourceKnowledge couldn’t do its job. “If we find someone who was interested in a shoe store suddenly going to Breitbart,” do we stop tracking them? “I have no easy answer,” he conceded.
The Bottom Line
So, it’s a problem as old as adtech itself, but it’s currently in the news. In some ways it’s intractable; in others, it’s readily addressed. It’s up to advertisers to make the decisions, but adtech platforms should care about their clients.
At the end of the day, of course, it was easy for the industry to agree, broadly at least, on what constitutes fraud, illegal activity, and even adult content. Editing for politics, or for truth in news, is what’s making people jumpy. Imagine, Alice Lincoln said, looking at a website’s constantly changing content and trying to decide if a particular brand would be okay with it. “Even if we thought that approach was right,” she said, “it’s totally impractical.”