Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
Description
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54, where we go through and debate the main reasons I still think doom is unlikely.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate. If you have the means and have enjoyed my podcast, I would appreciate your support via a paid subscription on Substack.

Timestamps

(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society's response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win

Transcript

TIME article

Dwarkesh Patel 0:00:51
Today I have the pleasure of speaking with Eliezer Yudkowsky. Eliezer, thank you so much for coming out to the Lunar Society.

Eliezer Yudkowsky 0:01:00
You're welcome.

Dwarkesh Patel 0:01:01
Yesterday, as we're recording this, you had an article in Time calling for a moratorium on further AI training runs. My first question is: it's probably not likely that governments are going to adopt some sort of treaty that restricts AI right now, so what was the goal in writing it?

Eliezer Yudkowsky 0:01:25
I thought that this was something very unlikely for governments to adopt, and then all of my friends kept on telling me, "No, no, actually, if you talk to anyone outside of the tech industry, they think maybe we shouldn't do that." And I was like, all right, then. I assumed that this concept had no popular support. Maybe I assumed incorrectly. It seems foolish, and to lack dignity, to not even try to say what ought to be done. There wasn't a galaxy-brained purpose behind it. I think that over the last 22 years or so, we've seen a great lack of galaxy-brained ideas playing out successfully.

Dwarkesh Patel 0:02:05
Has anybody in the government reached out to you, not necessarily after the article but just in general, in a way that makes you think that they have the broad contours of the problem correct?

Eliezer Yudkowsky 0:02:15
No. I'm going on reports that normal people are more willing than the people I've been previously talking to, to entertain calls that this is a bad idea and maybe you should just not do that.

Dwarkesh Patel 0:02:30
That's surprising to hear, because I would have assumed that the people in Silicon Valley, who are weirdos, would be more likely to be receptive to this sort of message. They could kind of get on board with the whole idea that AI will make nanomachines that take over. It's surprising to hear that normal people got the message first.

Eliezer Yudkowsky 0:02:47
Well, I hesitate to use the term midwit, but maybe this was all just a midwit thing.

Dwarkesh Patel 0:02:54
All right.
So my concern with either the 6-month moratorium, or a forever moratorium until we solve alignment, is that at this point it could make it seem to people like we're crying wolf. And it would be like crying wolf, because these systems aren't yet at a point at which they're dangerous.

Eliezer Yudkowsky 0:03:13
And nobody is saying they are. I'm not saying they are. The open letter signatories aren't saying they are.

Dwarkesh Patel 0:03:20
So if there is a point at which we can get the public momentum to do some sort of stop, wouldn't
More Episodes
Here is my conversation with Francois Chollet and Mike Knoop on the $1 million ARC-AGI Prize they're launching today. I did a bunch of Socratic grilling throughout, but Francois's arguments about why LLMs won't lead to AGI are very interesting and worth thinking through. It was really fun...
Published 06/11/24
Chatted with my friend Leopold Aschenbrenner about the trillion-dollar nationalized cluster, CCP espionage at AI labs, how unhobblings and scaling can lead to AGI by 2027, the dangers of outsourcing clusters to the Middle East, leaving OpenAI, and situational awareness. Watch on YouTube. Listen on Apple...
Published 06/04/24