Everyone is talking about deepfakes, and rightly so. Deepfake technology represents a broad threat to privacy, politics, and even business. A photo or video might seem real, but it could actually be an illusion, a pernicious patchwork of source footage and AI techniques.
The method originally came to the public’s attention through pornography. Techies superimposed celebrity faces onto naked bodies. Earlier this year, people also used the tool “FakeApp” to insert Nicolas Cage into classic movies as a joke. FakeApp makes this type of trickery easy. Users can automatically extract a large dataset of images from an existing video. Those images can then be fed into the system and merged with another sequence, creating a deepfake.
Could deepfakes swing a vote?
However, deepfaking is no longer relegated to smut and memes. Deepfakes could be used to sow political discord, thwart campaigns, or even incite global conflicts. Additionally, these AI-assisted, underhanded tactics could interfere with the free market and tarnish brand equity. CEOs could be directly targeted. There’s ample reason for everyone to be on edge.
Senator Marco Rubio has warned voters that deepfaking tech could conceivably sway election outcomes. A video might quickly go viral in which a candidate is seemingly espousing a controversial or unpopular position. Specific groups of voters could even be targeted to ensure that they see the personally offensive messaging and react. If the strategically altered video becomes popularized on the eve of an election, it could change the narrative and flip people’s votes. By the time the video is exposed as a fraud, it would be too late.
Rubio concluded, “You put all that together and what you have is not a threat to our elections, but a threat to our republic, a Constitutional crisis unlike any we have ever faced in the modern history of this country.”
It’s easy to understand how a falsified video could do damage, but other scenarios are equally corrosive. A public figure could undermine the credibility of a video recording that is both real and damaging by labeling it a “deepfake.” A conspiracy theorist could discount factual reporting and rewrite history.
Ironically, sometimes the best solution to a technologically created problem is actually more tech. Advanced tools can detect the subtle details that expose deepfaked content, such as a lack of blood flow under the skin or unusual blinking patterns. However, as these forensic techniques for identifying computer-generated fakes become widely known, the deepfake mischief-makers can adapt accordingly.
If they’re so inclined, these mischief-makers could create a massive public relations nightmare for brands. People are currently talking about deepfakes within a political context, but the business landscape isn’t immune.
On a corporate level, what if brands sabotaged other brands by putting words in the mouths of their CEOs? Sure, Coca-Cola probably isn’t going to do this to Pepsi. If the scheme were uncovered, the liability and loss of consumer trust would be enormous. But what about a small startup based overseas, desperately trying to gain market position by taking on the big guys? If it can’t be easily traced back, what do they have to lose?
At some point, an unethical entrepreneur might be crazy enough to do this. And there might not even be repercussions. The U.S. government has struggled to extradite Kim Dotcom, even though he has been accused of criminal copyright infringement on a large scale.
We’ve seen the ways that both consumers and markets have reacted to controversial messaging. When executives say the wrong thing, their brands usually take a hit. An ill-considered remark, or tweet, can even affect the financial health of a multinational corporation.
The founder of yoga apparel brand Lululemon once said that apparent problems with his company’s product were actually due to “some women’s bodies.” Wall Street analysts downgraded the stock to “underperform” and the brand was flamed on Twitter. Former Abercrombie & Fitch CEO Mike Jeffries made a similar gaffe when he was quoted as saying that his brand only wanted to market to cool, good-looking people. When he eventually resigned, shares jumped 8 percent.
BP CEO Tony Hayward said “I would like my life back” after his company caused the Gulf of Mexico oil spill. Going back even earlier, Gerald Ratner gained notoriety after he jokingly denigrated his own jewellery company. Sales plummeted, the company changed its name, and Ratner was fired.
Simple morsels of media content can produce a big impact. These particular instances were real. But what if, one day, this type of controversy is explicitly engineered?
This has actually happened before, without the use of sophisticated tools. In the 1990s, there were rumors that Tommy Hilfiger told Oprah that he didn’t want minorities to wear his company’s apparel. That rumor has been repeatedly debunked. Oprah herself said it was completely untrue. And yet, real damage was done. Twenty years later, Tommy Hilfiger still found himself trying to set the record straight on The Wendy Williams Show.
“Somebody came up with this rumor and it spread over the internet, and it hurt my heart and soul,” he explained. Wendy Williams joked that the rumor might have originated with a big name competitor.
This outright fabrication provides a compelling precedent. Brands should worry about deepfakes because someone once put words in the mouth of Tommy Hilfiger and people believed it, even without any video.
Deepfakes and the law
What about legal recourse?
Ryan J. Black and Pablo Tseng, lawyers and technology experts at the business law firm McMillan LLP, wrote a bulletin addressing the legal mechanisms that could be used in the war against deepfakes. Causes of action might include copyright infringement, defamation, violation of privacy, appropriation of personality, Criminal Code offenses, human rights complaints, intentional infliction of mental suffering, and harassment. However, the authors noted that it may be difficult to locate or identify the responsible party, given the anonymity offered by many internet sites.
When DMN asked Ryan J. Black which causes of action would be available to a sabotaged brand, he noted that companies aren’t allowed to be false or misleading in their practices and advertising. Regulators would also be able to examine such a scheme in a commercial context, rather than a political or free speech context.
“The executive who was being falsified would almost certainly have a defamation action. There could be other criminal or tort actions depending on the nature of the deepfake,” said Black.
Black also emphasized that the problem would be nothing new.
“I don’t know that you need deepfakes to do this with the way that social media works today,” he said. “Because we’ve seen quotes just misattributed to people, intentionally or not, that spread like wildfire all over the internet.”