Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment
Description
I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:

* time to AGI
* leaks and spies
* what's after generative models
* post-AGI futures
* working with Microsoft and competing with Google
* the difficulty of aligning superhuman AI

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate. If you have the means and have enjoyed my podcast, I would appreciate your support via a paid subscription on Substack.

Timestamps

(00:00) - Time to AGI
(05:57) - What's after generative models?
(10:57) - Data, models, and research
(15:27) - Alignment
(20:53) - Post AGI Future
(26:56) - New ideas are overrated
(36:22) - Is progress inevitable?
(41:27) - Future Breakthroughs

Transcript

Time to AGI

Dwarkesh Patel
Today I have the pleasure of interviewing Ilya Sutskever, who is the Co-founder and Chief Scientist of OpenAI. Ilya, welcome to The Lunar Society.

Ilya Sutskever
Thank you, happy to be here.

Dwarkesh Patel
First question, and no humility allowed. There are not that many scientists who will make a big breakthrough in their field, and there are far fewer who will make multiple independent breakthroughs that define their field throughout their career. What is the difference? What distinguishes you from other researchers? Why have you been able to make multiple breakthroughs in your field?

Ilya Sutskever
Thank you for the kind words. It's hard to answer that question. I try really hard, I give it everything I've got, and that has worked so far. I think that's all there is to it.

Dwarkesh Patel
Got it. What's the explanation for why there aren't more illicit uses of GPT? Why aren't more foreign governments using it to spread propaganda or scam grandmothers?

Ilya Sutskever
Maybe they haven't really gotten to do it a lot. But it also wouldn't surprise me if some of it were going on right now. I can certainly imagine they would be taking some of the open source models and trying to use them for that purpose. I would certainly expect this to be something they'd be interested in in the future.

Dwarkesh Patel
So it's technically possible, they just haven't thought about it enough?

Ilya Sutskever
Or haven't done it at scale using their own technology. Or maybe it is happening, which is annoying.

Dwarkesh Patel
Would you be able to track it if it were happening?

Ilya Sutskever
I think large-scale tracking is possible, yes. It requires special operations, but it's possible.

Dwarkesh Patel
Now there's some window in which AI is very economically valuable, let's say on the scale of airplanes, but we haven't reached AGI yet. How big is that window?

Ilya Sutskever
It's hard to give a precise answer, and it's definitely going to be a good multi-year window. It's also a question of definition, because AI, before it becomes AGI, is going to be increasingly more valuable year after year, in an exponential way.

In hindsight, it may feel like there was only one year or two years, because those two years were larger than the previous years. But I would say that already, last year, there was a fair amount of economic value produced by AI. Next year is going to be larger, and larger after that. So I think it's going to be a good multi-year chunk of time where that's going to be true, from now till AGI, pretty much.

Dwarkesh Patel
Okay. Because I'm curious, if there's a startup that's using your model, at some point if you have AGI there's only one business in the world, it's OpenAI. How much window does any business have where they're actually producing something that AGI can't produce?

Ilya Sutskever
It's the same question as a