Survey: Concerns About Deepfakes Spike As Election Approaches


SAN JOSE, Calif.—Online security solution provider McAfee has released new research exploring the impact artificial intelligence (AI) and the rise of deepfakes are having on consumers during elections. 

The data, from research conducted in early 2024 with 7,000 people globally, reveals that nearly two in three (66%) people are more concerned about deepfakes than they were a year ago.

In addition, nearly one in four Americans (23%) said they recently came across a political deepfake they later discovered to be fake. The actual number of people exposed to political and other deepfakes is expected to be much higher, given that many Americans are unable to decipher what is real versus fake, thanks to the sophistication of AI technologies. 

“It’s not only adversarial governments creating deepfakes this election season, it is now something anyone can do in an afternoon,” said Steve Grobman, McAfee’s chief technology officer. “The tools to create cloned audio and deepfake video are readily available and take only a few hours to master, and it takes just seconds to convince you that it's all real. The ease with which AI can manipulate voices and visuals raises critical questions about the authenticity of content, particularly during a critical election year. In many ways, democracy is on the ballot this year thanks to AI.”

  • Misinformation and disinformation emerged as key concerns for Americans, with the recent incident involving a fake robocall from President Joe Biden serving as an example of what could become a widespread issue. When asked about the most worrying potential uses of deepfakes, election-related topics were front and center: 43% cited influencing elections, 43% cited impersonating public figures (for example, politicians or well-known media figures), 37% cited undermining public trust in media, and 31% cited distorting historical facts.
  • More than half (53%) of respondents say AI has made it harder to spot online scams.
  • The vast majority (72%) of American social media users find it difficult to spot AI-generated content such as fake news and scams.
  • Just 27% of people feel confident they would be able to identify if a call from a friend or loved one was in fact real or AI-generated.
  • In the past 12 months, 43% of people say they’ve seen deepfake content, 26% of people have encountered a deepfake scam, and 9% have been a victim of a deepfake scam.
  • Of the people who encountered or were the victim of a deepfake scam:
    • Nearly 1 in 3 (31%) said they have experienced some kind of AI voice scam (for example, received a call, voicemail, or voice note that sounded like a friend or loved one but that they believed was actually a voice clone).
    • Nearly 1 in 4 (23%) said they came across a video, image, or recording of a political candidate, an impersonation of a public figure, and thought it was real at first.
    • 40% said they came across a video, image, or recording of a celebrity and thought it was real.
George Winslow

George Winslow is the senior content producer for TV Tech. He has written about the television, media and technology industries for nearly 30 years for such publications as Broadcasting & Cable, Multichannel News and TV Tech. Over the years, he has edited a number of magazines, including Multichannel News International and World Screen, and moderated panels at such major industry events as NAB and MIP TV. He has published two books and dozens of encyclopedia articles on such subjects as the media, New York City history and economics.