Why AI Should Be Taught to Know Its Limits
Description
One of AI’s biggest unsolved problems is what advanced algorithms should do when they confront a situation they don’t have an answer for. For programs like ChatGPT, that could mean providing a confidently wrong answer, often called a “hallucination”; for others, such as self-driving cars, the consequences could be far more serious. But what if AIs could be taught to recognize what they don’t understand and adjust accordingly? Usama Fayyad, executive director of the Institute for Experiential Artificial Intelligence at Northeastern University, thinks this could be the algorithmic answer to making future AIs better at what they do, by doing something too few humans can: recognizing their own limits.

What do you think about the show? Let us know on Apple Podcasts or Spotify, or email us: [email protected]

Further reading:
How Did Companies Use Generative AI in 2023? Here’s a Look at Five Early Adopters.
Your Medical Devices Are Getting Smarter. Can the FDA Keep Them Safe?
Artificial: The OpenAI Story
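The episode doesn’t describe a specific algorithm, but the idea of a model “knowing its limits” is often implemented as abstention: answer only when confidence clears a threshold, otherwise defer. Below is a minimal, hypothetical sketch of that pattern; the function names, the labels, and the 0.8 threshold are illustrative assumptions, not anything from the episode or from Fayyad’s work.

```python
# Hypothetical sketch of confidence-based abstention ("knowing your limits"):
# the model returns its top prediction only when it is confident enough,
# and otherwise says "I don't know" instead of guessing.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Convert raw scores to probabilities (shifted for numerical stability).
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def predict_or_abstain(logits: np.ndarray, labels: list[str],
                       threshold: float = 0.8) -> str:
    # Return the top label only if its probability clears the threshold.
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "I don't know"  # abstain: defer to a human or safer fallback
    return labels[best]

# Example: these logits are nearly tied, so the top probability (~0.39)
# falls below 0.8 and the model abstains rather than answering wrongly.
print(predict_or_abstain(np.array([1.2, 1.0, 0.9]), ["cat", "dog", "bird"]))
```

The design choice here trades coverage for reliability: a higher threshold means more abstentions but fewer confidently wrong answers, which is the failure mode the episode calls a hallucination.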
More Episodes
Hollywood studios are making big bets that artificial-intelligence models could help make movie magic cheaper than ever, including in the visual effects industry. And after Lions Gate Entertainment announced a new partnership with Runway to develop new tools trained on its catalog, AI may be even...
Published 11/15/24
Videogame cartridges and discs have mostly been replaced by downloads. Now, some console makers like Microsoft want to move videogames into the cloud-streaming business. Joost van Dreunen, an industry analyst and CEO of market research firm Aldora, joins WSJ’s Danny Lewis to talk about the new...
Published 11/08/24