🏛️ Technology

AI, Deepfakes & the Crisis of Truth

How artificial intelligence is making it impossible to know what's real online—and what we can do about it



What's the Challenge?

Artificial intelligence has created a crisis of trust in information. AI-generated images, videos, and text are now often indistinguishable from the real thing. Deepfake technology can put anyone's face on anyone's body, make politicians appear to say things they never said, and fabricate convincing 'evidence' of events that never happened. Meanwhile, AI-powered bots flood social media with propaganda, and recommendation algorithms shape which information you see. The result: it is increasingly difficult to know what is real and what is fake online. That threatens democracy itself, because when citizens cannot agree on basic facts, self-governance becomes impossible.

Where Most Americans Agree

  • It's becoming impossible to know what's real online
  • AI-generated misinformation poses serious threats to democracy
  • Social media companies should do more to combat fake content
  • Deepfake technology is dangerous and should have guardrails
  • Children are especially vulnerable to AI-generated manipulation
  • Foreign adversaries are using AI to spread propaganda and division
  • Journalists and fact-checkers face an impossible task against AI-generated content
  • The speed of AI advancement has outpaced our ability to regulate it
  • We need better tools to verify what's authentic
  • This problem will only get worse without action

Source: Pew Research Center 2024, AI Trust Survey

Current Perspectives from Both Sides

Understanding the full debate requires hearing what each side actually argues—not caricatures or strawmen.

Progressive Perspective

  • Tech companies prioritized profit over truth and enabled this crisis
  • AI is amplifying existing problems of disinformation and hate speech
  • Unregulated AI threatens marginalized communities who are targeted by deepfakes
  • We need strong government regulation of AI development and deployment
  • AI-generated misinformation undermines climate science and public health
  • Big Tech's AI tools are being used to manipulate elections and suppress votes

Conservative Perspective

  • Mainstream media already spreads misinformation—AI just makes it more obvious
  • Government regulation of AI will be used to censor conservative speech
  • Big Tech companies use 'fact-checking' to suppress conservative viewpoints
  • The real problem is lack of media literacy, not AI technology itself
  • Free market and technology innovation will solve these problems better than regulation
  • Government can't be trusted to determine what's true or false

These represent current talking points from each side of the political spectrum. Understanding both perspectives is essential for productive dialogue.

Evidence-Based Facts

95% of Americans have encountered misinformation online, with 63% saying they see it regularly

Source: Pew Research Center 2024

Deepfake videos increased by 900% from 2022 to 2024

Source: Deeptrace/Sensity AI Report

80% of Americans can't reliably distinguish AI-generated images from real photos

Source: MIT Media Lab Study 2024

AI-enabled foreign influence operations from Russia, China, and Iran targeted the 2024 election

Source: U.S. Intelligence Community Assessment

AI-generated scam calls and messages cost Americans over $10 billion in 2024

Source: Federal Trade Commission


Questions for Thoughtful Debate

  • How do we regulate AI-generated content without enabling censorship?
  • Should AI-generated images and videos be required to carry watermarks or labels?
  • What responsibility do tech platforms have for AI-generated misinformation?
  • How can we teach people to be more skeptical of online content without promoting conspiracy thinking?
  • Should creating malicious deepfakes be a federal crime?
  • Can we develop technology to detect AI-generated content faster than AI can fool it?
  • What role should government play in determining what information is true or false?
  • How do we protect democracy when citizens can't agree on basic facts?
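
Two of the questions above ask about watermarks, labels, and verification tools. A minimal sketch of the underlying idea follows: a publisher attaches a cryptographic tag to a file when it is created, and anyone can later check whether the file still matches that tag. This is an illustration only, written with Python's standard library; it uses a shared-secret HMAC purely to avoid dependencies, whereas real provenance systems such as C2PA Content Credentials embed signed metadata in the file and use public-key signatures so that anyone can verify without holding a secret.

```python
import hashlib
import hmac

# Illustrative only: a shared secret stands in for a publisher's signing key.
# Real provenance systems (e.g., C2PA Content Credentials) use public-key
# signatures embedded in the file's metadata.
PUBLISHER_KEY = b"example-publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag: an HMAC over the content's SHA-256 hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check whether the content still matches the tag issued at creation."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    original = b"raw bytes of a photo as captured"
    tag = sign_content(original)

    print(verify_content(original, tag))                 # True: unchanged
    print(verify_content(original + b" (edited)", tag))  # False: altered
```

The verification step itself is straightforward; the hard policy questions are about who issues and controls the signing keys, whether labels survive screenshots and re-encoding, and how to treat content that carries no label at all.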
