Description
When New Hampshire voters picked up the phone earlier this year and heard what sounded like the voice of President Joe Biden asking them not to vote in that state’s primary election, the stage was set for an unprecedented election year. The call was a deepfake — and the first major instance of artificial intelligence being used in the 2024 election. With the rise of AI tools that can credibly synthesize voices, images and videos, how are voters supposed to determine what they can trust as they prepare to cast their votes?
To find out how lawmakers and civil society are pushing back against harmful false narratives and content, we talked with experts tackling the problem on several fronts. Stephen Richer, an elected Republican in Phoenix, posts on X (formerly Twitter) to confront misinformation head-on and protect Arizona voters. Adav Noti, the executive director of the Campaign Legal Center (CLC), explains how good-governance advocates are hurrying to catch up with a profusion of new digital tools that make the age-old practices of misinformation and disinformation faster and cheaper than ever. And Mia Hoffman, a researcher who studies the effects of AI on democracies, reminds voters not to panic — bad information and malicious messaging don't always have the power to reach their audience, let alone sway people's opinions or actions.
An incumbent president drops out mid-race. A former president becomes a party's nominee for the first time in more than a century. A candidate is targeted by multiple acts of political violence. Newly emergent AI tools spread disinformation. And a Supreme Court that may be called upon...
Published 10/29/24
Imagine you're at home when you hear a knock. At your door are people who want you to share, in detail, whom you voted for in the last election, months ago. When you ask who they are and where they're from, they remain vague and perhaps even become aggressive.
This was the case for some Americans...
Published 10/22/24