AI-generated and manipulated images are quickly becoming far more realistic, and soon it may be impossible to tell the difference. That could create an opportunity for people to spread misinformation and make it difficult to know what’s real. Tech companies like Adobe, Microsoft and Google, along with academics and government agencies, are developing frameworks to verify images and, in some cases, show how they’ve been altered. But these techniques may come with security risks of their own. WSJ’s Alex Ossola and Charlotte Gartenberg explore the new technology solutions that aim to identify fake images online and the potential issues with getting them in front of users.
What do you think about the show? Let us know on Apple Podcasts or Spotify, or email us:
[email protected]
Further reading:
AI-Created Images Are So Good Even AI Has Trouble Spotting Some
Ask an AI Art Generator for Any Image. The Results Are Amazing—and Terrifying
Paparazzi Photos Were the Scourge of Celebrities. Now, It’s AI
AI, Art and the Future of Looking at a Painting
Some of the Thorniest Questions About AI Will Be Answered in Court