Founder Eric Steinberger on Magic’s Counterintuitive Approach to Pursuing AGI
Description
There’s a new archetype in Silicon Valley: the AI researcher turned founder. Instead of tinkering in a garage, they write papers that earn them the right to collaborate with cutting-edge labs until they break out and start their own. This is the story of wunderkind Eric Steinberger, the founder and CEO of Magic.dev. Eric came to programming through his obsession with AI and caught the attention of DeepMind researchers while still in high school. In 2022 he realized that AGI was closer than he had previously thought and started Magic to automate the software engineering necessary to get there. Among his counterintuitive ideas: the need to train proprietary large models, the belief that value will not accrue in the application layer, and the claim that the best agents will manage themselves. Eric also talks about Magic’s recent 100M token context window model and the HashHop eval they’re open sourcing.

Hosted by: Sonya Huang, Sequoia Capital

Mentioned in this episode:

David Silver: DeepMind researcher who led the AlphaGo team
Johannes Heinrich: a PhD student of Silver’s and DeepMind researcher who mentored Eric as a high schooler
Reinforcement Learning from Self-Play in Imperfect-Information Games: Johannes’s dissertation that inspired Eric
Noam Brown: DeepMind, Meta and now OpenAI reinforcement learning researcher who eventually collaborated with Eric and brought him to FAIR
ClimateScience: NGO that Eric co-founded in 2019 while a university student
Noam Shazeer: one of the original Transformer researchers at Google and founder of Character.ai
DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker: the first AI paper Eric ever tried to deeply understand
LTM-2-mini: Magic’s first 100M token context model, built using the HashHop eval (now available open source)

00:00 - Introduction
01:39 - Vienna-born wunderkind
04:56 - Working with Noam Brown
08:00 - “I can do two things. I cannot do three.”
10:37 - AGI to-do list
13:27 - Advice for young researchers
20:35 - Reading every paper voraciously
23:06 - The army of Noams
26:46 - The leaps still needed in research
29:59 - What is Magic?
36:12 - Competing against the 800-pound gorillas
38:21 - Ideal team size for researchers
40:10 - AI that feels like a colleague
44:30 - Lightning round
47:50 - Bonus round: 200M token context announcement