It’s about time…
No one could be blamed for questioning why major social media platforms apply advanced data analytics to every commercial purpose under the sun, but not to identifying and eradicating extremist content.
It appears things are about to change. Yesterday, Facebook announced that it has begun using an enhanced AI system to identify potential terrorist postings and accounts on its platform, and to automatically delete or block them.
Among the AI techniques being used by Facebook is image matching, which compares photos and videos people upload to Facebook against “known” terrorism images and video. A match generally means either that Facebook had previously removed the material, or that it had ended up in a database of such images that the company shares with YouTube, Twitter and Microsoft.
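The announcement doesn't detail how the matching works, but the basic idea can be sketched as fingerprint lookup: hash each upload and check the result against a set of fingerprints of previously removed content. The database and function names below are hypothetical, and a cryptographic hash like SHA-256 only catches exact byte-for-byte copies; production systems use perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

# Hypothetical shared database of fingerprints of previously
# removed terrorist images (here, SHA-256 hex digests).
known_terrorist_hashes = {
    hashlib.sha256(b"previously-removed-image-bytes").hexdigest(),
}

def is_known_match(image_bytes: bytes) -> bool:
    """Return True if the upload's fingerprint is already in the database."""
    return hashlib.sha256(image_bytes).hexdigest() in known_terrorist_hashes

print(is_known_match(b"previously-removed-image-bytes"))  # True
print(is_known_match(b"new-image-bytes"))                 # False
```

Set membership makes the lookup fast regardless of database size, which matters at the scale of uploads Facebook handles.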
Facebook is also developing “text-based signals” from previously removed posts that praised or supported terrorist organizations. It will feed those signals into an NLP program which, over time, will teach itself to detect similar posts.
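The announcement doesn't specify the model, but the idea of learning signals from removed posts can be illustrated with a toy bag-of-words comparison: score a new post against word frequencies drawn from removed posts versus benign posts. The example texts below are invented, and a real system would use a trained language model rather than this sketch.

```python
from collections import Counter

# Hypothetical training data: previously removed vs. benign posts.
removed_posts = ["praise the glorious attack", "support the fighters"]
benign_posts = ["praise the new restaurant", "support the local team"]

def word_scores(posts):
    """Relative word frequencies across a set of posts."""
    counts = Counter(w for p in posts for w in p.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

flagged_signals = word_scores(removed_posts)
benign_signals = word_scores(benign_posts)

def looks_like_removed(text: str) -> bool:
    """Flag a post if it scores closer to removed content than benign content."""
    words = text.lower().split()
    flagged = sum(flagged_signals.get(w, 0.0) for w in words)
    benign = sum(benign_signals.get(w, 0.0) for w in words)
    return flagged > benign

print(looks_like_removed("support the fighters"))        # True
print(looks_like_removed("praise the new restaurant"))   # False
```

As new posts are removed, they can be folded back into the flagged set, which is what lets such a system improve over time.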
The company, according to the announcement, intends to continually update the software, but for the time being will focus on the following:
- Language understanding
- Removing terrorist clusters
- Cross-platform collaboration
In years past, Facebook and other social media companies relied heavily on the manual effort of human moderators to identify and potentially block or delete offensive content. And even as algorithms and automated systems became key to controlling the news feed, and even to detecting child pornography, the hunt for terrorist content continued to be relegated to humans.
While an automated system is an improvement, Facebook admitted in the announcement that “AI can’t catch everything.” To handle more nuanced cases, Facebook will maintain a team of human experts to review real-world threats.
Facebook also reiterated its December announcement of ongoing collaboration with Twitter, Microsoft and Google-owned YouTube to support a shared digital database that “fingerprints” flagged terrorist content.
These efforts, including the collaboration, should also create a safer space for brands, some of which have suffered bruising PR problems from controversial ad placement.