When it comes to deepfakes, what if fact could be distinguished from fraud? University of Sydney neuroscientists have discovered a new way: people's brains can detect AI-generated fake faces, even when those same people cannot consciously report which faces are real and which are fake.
Deepfake videos, images, audio, and text appear authentic but are in fact computer-generated fabrications designed to mislead viewers and sway public opinion. They are the new foot soldiers in the spread of disinformation, and they are rife, appearing in political, cybersecurity, counterfeiting, and border-control contexts.
For example, in 2016 a Russian troll farm deployed over 50,000 bots on Twitter, using deepfakes as profile pictures, in an attempt to influence the outcome of the US presidential election. Some research suggests they may have boosted Donald Trump's vote by over three percent. More recently, a deepfake video of Volodymyr Zelensky urging his troops to surrender to Russian forces surfaced on Twitter, Facebook, and YouTube, muddying people's understanding of the war in Ukraine with potentially high-stakes consequences.