Marketing in the Summer of Hate

No mainstream marketer needs to know about 8chan, and we won’t link to it here. Self-described as presenting the “darkest reaches of the Internet,” it’s a Reddit for the wild side: a home for hatred, pornography, and abuse.

It’s the site where the anti-immigrant manifesto, allegedly written by the El Paso murderer, was posted. Security vendor Cloudflare stopped supporting 8chan over the weekend, and the site has since suffered a number of outages. Its founder, Frederick Brennan, who is no longer associated with the site, has called for it to be shut down. He created it as a refuge for free speech online, considering an earlier imageboard, 4chan, to be too heavily moderated.

In response, Andrew Torba, who created the website Gab (the Pittsburgh synagogue shooter was a member), said: “If 8chan is shutdown here is what will happen: someone else will spin up a new imageboard, say 20chan or whatever. People will flock to that.” Probably true, but these marginal, extremist ventures into anti-social media are only part of the problem.

There’s still a home for hatred, after all, on platforms like Facebook, YouTube, and Twitter — and those are platforms marketers should, and do, care about. Millennials and post-millennials are all about following brands on social media; 90 percent of social media users have used the channel to engage with brands; 50 million small businesses use Facebook for marketing purposes; and so on.

But are these platforms nice places to be any more? In an excellent article published yesterday, Recode exposed the glacial slowness with which the major social media sites have responded to white supremacist, or neo-Nazi, content. Facebook banned this content as recently as March of this year (the Charlottesville Unite the Right rally was held, believe it or not, two years ago). YouTube banned neo-Nazis (and Sandy Hook trolls) in June.

David Duke still has a Twitter account.

The fact is, enforcement of these sparkling new policies is so lax as to be almost non-existent. It takes a few minutes, using search terms like “illegals” and “borders,” to surface viciously racist updates, cartoons, and videos on Facebook. Twitter, policies notwithstanding, remains in practice hospitable to racist slurs, and even threats of violence.

At what point are audiences going to be turned off? Facebook’s U.S. user base, and especially its younger demographic, has shown some recent decline; but Zuckerberg’s backstop, of course, is the burgeoning Instagram audience (although Instagram, too, is not immune from toxic speech). Twitter’s audience is much smaller than Facebook’s, both globally and in the U.S.; 80 percent of Twitter’s users are based outside the U.S., and only a small percentage are power users who visit the site daily.

But let’s be honest: those audiences aren’t going away any time soon.

Regulators might yet have a say in the future of these platforms. The FTC hit Facebook with an affordable, but non-trivial, $5 billion fine for its misuse of user data last month. And one of the platform’s co-founders, Chris Hughes, has made an articulate case for treating it as a monopoly, and breaking it up.

There’s another legislative solution available which would be a game-changer when it comes to posting hate speech, defamation, and content like deep fake videos. It would mean taking a fresh look at a brief, but immensely significant, passage in Section 230 of the 1996 Communications Decency Act (CDA):

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In the nascent days of the World Wide Web, such language relieved webmasters of the responsibility for reviewing and checking — editing — everything posted by other users on their sites. It can be seen as having prevented the internet from being crushed at the outset by expensive litigation. Indeed, it has a parallel in similar language in the 1998 Digital Millennium Copyright Act, which limited webmasters’ liability for copyright infringements posted by site users.

But the latitude granted webmasters by the CDA has already been limited in certain ways, in particular to remove immunity for hosting content related to sex trafficking. Should there be continued immunity for hosting calls for violence against individuals or groups (such speech, importantly, is not protected by the First Amendment)? At the very least, that immunity should be questioned for those websites which can easily afford to shoulder responsibility.

What about free speech? What about the reason platforms like 4chan and 8chan were created? What about so-called “platform neutrality”? The suggestion here isn’t that content should be banned, but that the platforms which publish it (and these platforms are the pre-eminent publishers of our day) should take the same responsibility for content that newspapers, magazines, and broadcasters do.

It’s in the interest of marketers to call for this kind of profound reform.
