Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning
Description
Jonathan Frankle is finishing his PhD at MIT, advised by Michael Carbin. His main research interest is using experimental methods to understand the behavior of neural networks. His current work focuses on finding sparse, trainable neural networks.

**Highlights from our conversation:**

🕸 "Why is sparsity everywhere? This isn't an accident."

🤖 "If I gave you 500 GPUs, could you actually keep those GPUs busy?"

📊 "In general, I think we have a crisis of science in ML."
More Episodes
Percy Liang is an associate professor of computer science and statistics at Stanford. These days, he’s interested in understanding how foundation models work, how to make them more efficient, modular, and robust, and how they shift the way people interact with AI—although he’s been working on...
Published 05/09/24
Seth Lazar is a professor of philosophy at the Australian National University, where he leads the Machine Intelligence and Normative Theory (MINT) Lab. His unique perspective bridges moral and political philosophy with AI, introducing much-needed rigor to the question of what will make for a good...
Published 03/12/24