One story we returned to throughout 2017 is the risks posed to brand safety through association with divisive rhetoric, false news stories, and terrorist activities — in addition to the familiar pitfalls of pornography and violence.
It’s anecdotally obvious that such threats have been growing: the slightest misstep by a brand, or even the inadvertent placement of its messages adjacent to hate speech or worse, can trigger adverse reactions from consumers. Of course, for consumers, the ability to speak directly back to brands on social media may be a net positive; and some brands have been able to ride the wave by proactively advancing socially responsible messages.
But it’s a minefield; and a recent report conducted by Digiday and GumGum (more about them later), “BrandRX: The New Brand Safety Crisis,” suggests just how densely packed with mines the minefield is.
Based on a Q4 2017 survey of over 200 (mainly U.S.) industry professionals, the report concludes that an astonishing 75% of brands reported at least one brand-threatening incident in the past year. These may not be on the scale of the United Airlines video, or top brands finding themselves supporting ISIS recruitment on YouTube, but they do suggest how pervasive the threat is.
Interestingly, while most of those surveyed expressed concern about proximity to hate speech (34%) and violence (13%), actual incidents seem to have been sparked by unexpected disasters, divisive politics, or false news reports: the same number, 39%, reported incidents in each of those three categories. What’s more, blacklisting doesn’t seem to have been effective in keeping programmatic ads away from toxic environments, not least because of the number of “middle men” involved in the ad supply chain.
Despite everything, 15% of brands surveyed have no protective solutions in place, while 45% have had solutions in place for less than a year, according to the report. One particular risk, of course, is association with risky images (swastikas, guns, you can fill in the rest), especially given social media’s heavy lean on visual content. AI has made huge strides in recent years in enabling large-scale analysis of visual content. This is where GumGum, an “applied computer vision company,” has skin in the game.
A number of vendors, of course, have been making claims for computer vision capabilities (Salesforce’s Einstein Vision is just one of the more prominent). There’s been an emphasis on the positive use case of being able to join and amplify conversations about products and services where social interaction is based predominantly on images rather than text. But the brand risk use case is important too.
The report observes that “relatively few brands and agencies are using image recognition technology…to screen for brand unsafe pictures.” And if the tech use is low among brands and agencies, it’s almost completely absent among publishers.
The next frontier, of course, is using deep learning to analyse and identify problems in videos, not just in static images. Vendors (including GumGum) are scanning video; but as Errol Apostolopoulos of Crimson Hexagon told me last year, “people who can do image analysis can do video as well, because it’s just multiple frames. It’s really just the volume of processing that incurs cost.”
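The “multiple frames” point can be made concrete with a minimal sketch. This is not GumGum’s (or any vendor’s) actual pipeline: `classify_frame` is a hypothetical stand-in for a real image-recognition model, the unsafe-label set is illustrative, and frames are mocked as label sets rather than pixels. The sketch shows the shape of the approach, and why cost scales with the volume of frames processed:

```python
from dataclasses import dataclass
from typing import Dict, Iterable, Set

@dataclass
class Frame:
    index: int
    labels: frozenset  # what a real model would predict from the pixels

# Illustrative only; a real system would use a much richer taxonomy.
UNSAFE_LABELS = {"weapon", "hate_symbol", "graphic_violence"}

def classify_frame(frame: Frame) -> Set[str]:
    """Stand-in for an image-recognition model: return any unsafe labels found."""
    return set(frame.labels) & UNSAFE_LABELS

def screen_video(frames: Iterable[Frame], sample_every: int = 30) -> Dict[int, Set[str]]:
    """Screen a video by classifying every Nth frame.

    Sampling fewer frames trades recall for processing cost -- the
    volume-of-processing point made in the quote above.
    """
    hits: Dict[int, Set[str]] = {}
    for frame in frames:
        if frame.index % sample_every:
            continue  # skip unsampled frames to control cost
        found = classify_frame(frame)
        if found:
            hits[frame.index] = found
    return hits

# Mock 90-frame video in which one sampled frame carries a flagged label.
video = [Frame(i, frozenset({"weapon"} if i == 60 else {"crowd"}))
         for i in range(90)]
result = screen_video(video)
print(result)  # → {60: {'weapon'}}
```

At a real video platform’s scale, that `sample_every` knob is exactly where the cost Apostolopoulos describes lives.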
Just imagine the volume.