Image: Pixelated human face representing AI (iStock, ArtemisDiana)

Deepfake Videos during Russian Invasion of Ukraine Could Undermine Trust

Thematic analysis of 2022 tweets highlights impacts of wartime deepfakes on emotions, trust, and conspiracy theories

by PLOS

A new study explores themes in Twitter discussions of deepfake videos related to the Russian invasion of Ukraine, highlighting the potential for real videos to be mistaken for deepfakes and for deepfakes to fuel conspiracy theories. John Twomey of University College Cork, Ireland, and colleagues present these findings in the open-access journal PLOS ONE.

Created using artificial intelligence, deepfake videos typically feature a person saying and doing things they never actually did in real life. Deepfake technology has advanced considerably, sparking concerns about its potential harms. Deepfakes related to the Russian invasion of Ukraine represent the first instances in which deepfakes have been used in attempts to influence a war.


To better understand the potential harms of deepfakes, Twomey and colleagues analyzed Twitter discussions about deepfakes related to the invasion. They used a qualitative approach known as thematic analysis to identify and understand patterns in the discussions, which included a total of 1,231 tweets from 2022.

The researchers found that many of the tweets expressed negative reactions to news about deepfakes. For instance, some tweets expressed worry, shock, or confusion about news related to a deepfake that falsely depicted Ukrainian President Volodymyr Zelensky surrendering to Russia. However, some tweets overlooked potential harms or had positive reactions to deepfakes directed against political rivals, especially deepfakes created as satire or entertainment.

Some tweets warned of the need to prepare for increased use of deepfakes, discussed how to detect them, or highlighted the role of the media and government in rebutting them. However, other tweets suggested that deepfakes had eroded users’ trust to the point that they no longer trusted any footage of the invasion. Some tweets linked deepfakes to users’ apparent belief in conspiracy theories, such as claims that deepfakes of world leaders were being used as cover while the leaders were actually in hiding, or that the entire invasion was fake anti-Russian propaganda.

This analysis suggests that efforts to educate the public about deepfakes may unintentionally undermine trust in real videos. The authors note that their findings and future research could help inform efforts to mitigate the harms of deepfakes.

The authors add: “Much previous research on deepfakes has been concerned with potential future harms of the technology. However, we have focused on how deepfakes are already impacting social media as we have seen during Russia's invasion of Ukraine. Our research shows how deepfakes are undermining faith in real media and are being used to evidence deepfake conspiracy theories.”

- This press release was provided by PLOS