
New survey finds most Americans expect AI abuses will affect 2024 election

Additional findings from the survey by the Imagining the Digital Future Center and the Elon University Poll were that many are not confident they can detect faked photos, videos or audio.

Seventy-eight percent of American adults expect abuses of artificial intelligence (AI) systems to affect the outcome of the 2024 presidential election, according to a new national survey by the Elon University Poll and the Imagining the Digital Future Center at Elon University.

The survey finds:

  • 73% of Americans believe it is "very" or "somewhat" likely AI will be used to manipulate social media to influence the outcome of the presidential election, for example by generating information from fake accounts or bots or by distorting people's impressions of the campaign.
  • 70% say it is likely the election will be affected by the use of AI to generate fake information, video and audio material.
  • 62% say the election is likely to be affected by the targeted use of AI to convince some voters not to vote.
  • In all, 78% say at least one of these abuses of AI will affect the presidential election outcome. More than half think all three abuses are at least somewhat likely to occur.

The Imagining the Digital Future Center works to discover and broadly share a diverse range of opinions, ideas and original research about the impact of digital change. This survey was done in collaboration with the Elon University Poll, which conducts statewide and national surveys on issues of importance to voters throughout the nation and in North Carolina. See the full report for more details on the findings as well as the survey methodology.

A chart displaying results of the Elon University Poll

These concerns about AI and the election put Americans in a punishing frame of mind. Fully 93% think some penalty should be applied to candidates who maliciously and intentionally alter or fake photos, videos or audio files.

  • 46% think the punishment should be removal from office.
  • 36% say offenders should face criminal prosecution.

By a nearly 8-1 margin, more Americans feel the use of AI will hurt the coming election than help it: 39% say it will hurt and 5% think it will help. Some 37% say they are not sure.

Americans鈥 concerns about the use and impact of AI systems occur in a challenging and confusing news and information environment.

  • 52% are not confident they can detect altered or faked audio material.
  • 47% are not confident they can detect altered videos.
  • 45% say they are not confident they can detect faked photos.

They have far less faith in the capacity of others to detect fakes: About 7 in 10 are not confident in most voters' ability to detect photos, videos and audio that have been altered or faked.

A chart displaying results of the Elon University Poll

"Voters think this election will unfold in an extraordinarily challenging news and information environment," said Lee Rainie, Director of Elon University's Imagining the Digital Future Center. "They anticipate that new kinds of misinformation, faked material and voter-manipulation tactics are being enabled by AI. What's worse, many aren't sure they can sort through the garbage they know will be polluting campaign-related content."

This survey finds that 23% of U.S. adults have used large language models (LLMs) or chatbots such as ChatGPT, Gemini or Claude. It also explores one of the main concerns related to these models: Are they mostly fair or mostly biased when they answer users' questions about politics and public policy?

Majorities of Democrats, Republicans and independents say they are not sure if these systems are fair or biased to different groups. At the same time, there are some notable differences among partisans who do have views. Republicans are generally more wary of AI models than Democrats are. For instance, Republicans are more likely to think AI systems are biased against their views. Surprisingly, they are also more likely than Democrats to think that AI systems are biased against the views of Democrats.

Here are the findings:

  • 36% of Republicans think AI systems are biased against Republicans, compared with 15% of Democrats who think that.
  • 23% of Republicans also, perhaps paradoxically, think AI systems are biased against Democrats, compared with 14% of Democrats who think that.

In addition, Republicans are more likely than Democrats to think AI systems are biased against men and White people. Democrats are somewhat less likely than Republicans to think those biases exist in AI systems.

Some 60% of Americans say they are "very" or "somewhat" confident people's votes will be accurately cast and counted. On this question, 83% of Democrats are confident, while 60% of Republicans are not confident.

A chart displaying results of the Elon University Poll

"Misinformation in elections has been around since before the invention of computers, but many worry that the sophistication of AI technology in 2024 gives bad actors an accessible tool to spread misinformation at an unprecedented scale," said Jason Husser, professor of political science and director of the Elon University Poll. "We know that most voters are aware of AI risks to the 2024 election. However, the behavioral implications of that awareness will remain unclear until we see the aftermath of AI-generated misinformation. An optimistic hope is that risk-aware voters may approach information in the 2024 cycle with heightened caution, leading them to become more sophisticated consumers of political information. A pessimistic outlook is that worries about AI misinformation might translate into diminished feelings of self-efficacy, institutional trust and civic engagement."

The survey reported here was conducted from April 19-21, 2024, among 1,020 U.S. adults. It has a margin of error of 3.2 percentage points. Read the full report for more details.
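As a rough check on the reported figures, the textbook margin of error for a simple random sample of 1,020 respondents at 95% confidence comes out to about 3.1 percentage points; the 3.2 the poll reports is plausibly slightly larger because pollsters typically apply a design effect for weighting. A minimal sketch of that calculation (the function name and defaults here are illustrative, not from the report):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample of size n.

    p = 0.5 is the most conservative assumption (it maximizes p * (1 - p)),
    and z = 1.96 is the standard normal critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For the survey's sample of 1,020 U.S. adults:
moe = margin_of_error(1020)
print(f"{moe * 100:.1f} percentage points")  # about 3.1 under SRS assumptions
```

The small gap between this simple-random-sample figure and the reported 3.2 points is consistent with the weighting adjustments common in national polls.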