AI and Election Disinformation

AI is being used in several concerning ways to spread disinformation during elections.

Here are a few key methods being employed:

  1. Deepfakes: AI-generated deepfake videos and audio can create convincing but false representations of political figures, making it seem like they said or did things they never did. This can mislead voters and create confusion.
  2. Automated Bots: AI-powered bots can flood social media with false information, amplifying misleading narratives and making them appear more credible due to the sheer volume of posts.
  3. Targeted Misinformation: AI can analyze voter data to create highly targeted misinformation campaigns. This means false information can be tailored to specific groups of voters to influence their opinions and decisions.
  4. Fake News Generation: AI can generate fake news articles that look and sound like legitimate news sources. These articles can spread quickly online, especially if they are shared by trusted individuals or groups.
  5. Manipulated Images and Videos: AI can alter images and videos to create misleading content. For example, it can make it appear as though a candidate is involved in a scandal or has taken a controversial stance on an issue.

In the 2024 U.S. election cycle, AI has increasingly been used to spread disinformation, with advances in generative AI amplifying the reach and sophistication of these campaigns. Key AI-driven tactics include creating deepfakes, generating misleading images, videos, and audio, and manipulating social media narratives. For example, deepfake technology has been used to impersonate public figures, producing false endorsements or advice that can confuse voters. These synthetic media products are not always clearly labeled, making it hard for people to distinguish authentic content from AI-generated forgeries.

AI also impacts election integrity through the automated spread of biased or incomplete information. Chatbots and AI-driven news sites can provide misleading or biased answers to political questions, furthering confusion. For instance, some platforms have responded inconsistently when asked for voting information, even in critical battleground states. AI systems also exploit people's tendency to rely on "cognitive heuristics," the mental shortcuts that make it easier to trust popular opinions or familiar-looking sources. This leads to faster sharing of unverified information, especially on mobile devices, where people tend to engage less critically with content.

In response, voters are encouraged to seek information from credible sources and apply a critical lens to content that appears sensational or heavily biased. Fact-checking and consulting multiple sources are crucial, especially since the authenticity of images and videos may not be immediately clear.