Connor Leahy on Why Humanity Risks Extinction from AGI
Description
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.

Here's the document we discuss in the episode: https://www.thecompendium.ai

Timestamps:
00:00 The Compendium
15:25 The motivations of AGI corps
31:17 AI is grown, not written
52:59 A science of intelligence
01:07:50 Jobs, work, and AGI
01:23:19 Superintelligence
01:37:42 Open-source AI
01:45:07 What can we do?
More Episodes
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we can find meaning in an increasingly automated...
Published 11/08/24
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI...
Published 10/25/24