Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
Description
In terms of the depth and range of topics, this episode is the best I’ve done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

We ended up talking for 8 hours, so I'm splitting this episode into 2 parts. This part is about Carl’s model of an intelligence explosion, which integrates everything from:
* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon AIs could do large parts of AI research themselves, and whether that would lead to faster and faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover, depending on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.

The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy-brain stuff. Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(00:00:00) - Intro
(00:01:32) - Intelligence Explosion
(00:18:03) - Can AIs do AI research?
(00:39:00) - Primate evolution
(01:03:30) - Forecasting AI progress
(01:34:20) - After human-level AGI
(02:08:39) - AI takeover scenarios

Get full access to The Lunar Society at www.dwarkeshpatel.com/subscribe
More Episodes
Here is my conversation with Francois Chollet and Mike Knoop on the $1 million ARC-AGI Prize they're launching today. I did a bunch of Socratic grilling throughout, but Francois’s arguments about why LLMs won’t lead to AGI are very interesting and worth thinking through. It was really fun...
Published 06/11/24
Chatted with my friend Leopold Aschenbrenner on the trillion dollar nationalized cluster, CCP espionage at AI labs, how unhobblings and scaling can lead to 2027 AGI, dangers of outsourcing clusters to Middle East, leaving OpenAI, and situational awareness. Watch on YouTube. Listen on Apple...
Published 06/04/24