10 - AI's Future and Impacts with Katja Grace
Description
To ensure that AI does not cause an existential catastrophe, it is likely important to understand how AI will develop in the future, and why exactly it might or might not cause such a catastrophe. In this episode, I interview Katja Grace, researcher at AI Impacts, who has surveyed AI researchers about when they expect superhuman AI to be reached, collected data about how rapidly AI tends to progress, and thought about the weak points in arguments that AI could be catastrophic for humanity.

Topics we discuss:
00:00:34 - AI Impacts and its research
00:08:59 - How to forecast the future of AI
00:13:33 - Results of surveying AI researchers
00:30:41 - Work related to forecasting AI takeoff speeds
00:31:11 - How long it takes AI to cross the human skill range
00:42:47 - How often technologies have discontinuous progress
00:50:06 - Arguments for and against fast takeoff of AI
01:04:00 - Coherence arguments
01:12:15 - Arguments that AI might cause existential catastrophe, and counter-arguments
01:13:58 - The size of the super-human range of intelligence
01:17:22 - The dangers of agentic AI
01:25:45 - The difficulty of human-compatible goals
01:33:54 - The possibility of AI destroying everything
01:49:42 - The future of AI Impacts
01:52:17 - AI Impacts vs academia
02:00:25 - What AI x-risk researchers do wrong
02:01:43 - How to follow Katja's and AI Impacts' work

Links:
- The transcript
- "When Will AI Exceed Human Performance? Evidence from AI Experts"
- AI Impacts page with more complete survey results
- Likelihood of discontinuous progress around the development of AGI
- Discontinuous progress investigation
- The range of human intelligence