Fake news is already eroding our shared sense of reality. Now, deepfakes and AI stand to fuel disinformation and imperil democracy.
A still image from a deepfake video of former U.S. President Barack Obama (AP Photo)
The unprecedented mob assault on the U.S. Capitol on January 6 represents perhaps the most stunning collision yet between the world of online disinformation and reality.
The supporters of U.S. President Donald Trump who broke into Congress did so in the belief that the U.S. election had been stolen from them, after weeks of consuming unproven narratives about “ballot dumps,” manipulated voting machines and Democratic big-city corruption. Some — including the woman who was shot dead — were driven by the discredited QAnon conspiracy theory, which casts Democratic Party elites as a pedophile ring and Trump as the savior.
It’s tempting to hope that disinformation and its corrosive effects on democracy may have reached a high-water mark with the events of January 6 and the end of Trump’s presidency. But trends in technology and society’s increasing separation into social media echo chambers suggest that worse may be to come.
Imagine for a moment if video of the Capitol riot had been manipulated to replace the faces of Trump supporters with those of known activists from antifa, a left-wing, anti-fascist and anti-racist political movement. This would have bolstered the unproven “false flag” story that has since emerged. Or imagine if thousands of different stories written by artificial intelligence software and peddling that version of events had flooded social media and been picked up by news organizations in the hours after the assault.
Not only does that technology exist; it is getting more sophisticated and easier to access by the day.
Deepfakes and AI can erode our trust in democracy.
Deepfake, or synthetic, videos are starting to seep from pornography — where they’ve mostly been concentrated — into the world of politics. A deepfake of former President Barack Obama using an expletive to describe Trump has garnered over eight million views on YouTube since it was released in 2018.
Almost anyone familiar with Obama’s appearance and speaking style can tell there’s something amiss with that video. But two years is an eternity in AI-driven technology, and many experts believe it will soon be impossible for the human eye and ear to spot the best deepfakes.
A deepfake specialist was hailed early last year for using freely available software to “de-age” Robert De Niro and Joe Pesci in the movie “The Irishman,” producing a result that many critics considered superior to the work of the film’s own visual-effects supervisor.
In recent years, the sense of shared, objective reality and trust in institutions have already come under strain as social media bubbles hasten the spread of fake news and conspiracy theories. The worry is that deepfakes and other AI-generated content will supercharge this trend in coming years.
“This is disastrous to any liberal democratic model because in a world where anything can be faked, everyone becomes a target,” Nina Schick, the author of “Deepfakes: The Coming Infocalypse,” told U.S. author Sam Harris in a recent podcast.
“But even more than that, if anything can be faked … everything can also be denied. So the very basis of what is reality starts to become corroded.”
Governments are under pressure to do more to combat disinformation.
Illustrating her point is the reaction to Trump’s video statement released a day after the storming of Congress. While some of his followers online saw it as a betrayal, others reassured themselves by dismissing it as a deepfake.
On the text side, the advent of GPT-3 — an AI program that can produce articles indistinguishable from those written by humans — has potentially powerful implications for disinformation. Writing bots could be programmed to produce fake articles or spew political and racial hatred at a volume that could overwhelm both fact-based content and the systems meant to moderate it.
Society has been grappling with written fake news for years, and photographs have long been easily manipulated through software. But convincingly faked videos and AI-generated stories seem to many to represent a deeper, more viral threat to reality-based discourse.
It’s clear that there’s no silver-bullet solution to the disinformation problem. Social media platforms like Facebook have a major role to play and are developing their own AI technology to better detect fake content. While fakers are likely to keep evolving to stay ahead, stricter policing and quicker action by online platforms can at least limit the impact of false videos and stories.
Governments are coming under pressure to push Big Tech into taking a harder line against fake news, including through regulation. Authorities can devote more funding to digital media literacy programs in schools and elsewhere to help individuals become more alert and proficient in identifying suspect content.
When it comes down to it, the real power of fake news hinges on those who believe it and spread it.
Three questions to consider:
- How can technology be used to spread fake news?
- Why is disinformation potentially harmful to democracy?
- How do you think the rise of AI technology will affect the type of information people consume?
Stuart Grudgings reported from dozens of countries in a 19-year career with Reuters. As Malaysia bureau chief, he contributed to a Pulitzer Prize-winning series of stories on the plight of Myanmar’s Rohingya Muslims.