Episodes
Vincent Sitzmann (Google Scholar) (Website) is a postdoc at MIT. His work is on neural scene representations in computer vision.  Ultimately, he wants to make representations that AI agents can use to solve the same visual tasks humans solve regularly, but that are currently impossible for AI. **Highlights from our conversation:** 👁 “Vision is about the question of building representations” 🧠 “We (humans) likely have a 3D inductive bias” 🤖 “All computer vision should be 3D computer...
Published 05/20/21
Dylan Hadfield-Menell (Google Scholar) (Website) recently finished his PhD at UC Berkeley and is starting as an assistant professor at MIT. He works on the problem of designing AI algorithms that pursue the intended goal of their users, designers, and society in general.  This is known as the value alignment problem. Highlights from our conversation: 👨‍👩‍👧‍👦 How to align AI to human values 📉 Consequences of misaligned AI -> bias & misdirected optimization 📱 Better AI recommender...
Published 05/12/21
Drew Linsley (Google Scholar) (Website) is a Paul J. Salem senior research associate at Brown, advised by Thomas Serre. He is working on building computational models of the visual system that serve the dual purpose of (1) explaining biological function and (2) extending artificial vision. Prior to his work in the Serre lab, he completed a PhD in computational neuroscience at Boston College and a BA in Psychology at Hamilton College. His most recent paper at NeurIPS is Stable and expressive...
Published 04/02/21
Giancarlo Kerg (Google Scholar) is a PhD student at Mila, supervised by Yoshua Bengio and Guillaume Lajoie.  He is working on out-of-distribution generalization and modularity in memory-augmented neural networks.  Prior to his PhD, he studied pure mathematics at Cambridge and Université Libre de Bruxelles. His most recent paper at NeurIPS is Untangling tradeoffs between recurrence and self-attention in neural networks.  It presents a proof for how self-attention mitigates the gradient...
Published 03/27/21
Yujia Huang (@YujiaHuangC) is a PhD student at Caltech, working at the intersection of deep learning and neuroscience.  She worked on optics and biophotonics before venturing into machine learning. Now, she hopes to design “less artificial” artificial intelligence. Her most recent paper at NeurIPS is Neural Networks with Recurrent Generative Feedback, introducing Convolutional Neural Networks with Feedback (CNN-F). Yujia is open to working with collaborators from many areas: neuroscience,...
Published 03/18/21
Our next guest, Julian Chibane, is a PhD student in the Real Virtual Humans group at the Max Planck Institute for Informatics in Germany. His recent work centers on implicit functions for 3D reconstruction, and his most recent paper at NeurIPS is Neural Unsigned Distance Fields for Implicit Function Learning. He also introduced Implicit Feature Networks (IF-Nets) in Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion. Highlights 🖼 How, surprisingly, the...
Published 03/05/21
Katja Schwarz came to machine learning from physics, and is now working on 3D geometric scene understanding at the Max Planck Institute for Intelligent Systems. Her most recent work, “Generative Radiance Fields for 3D-Aware Image Synthesis,” revealed that radiance fields are a powerful representation for generative image synthesis, leading to 3D-consistent models that render with high fidelity. We discuss the ideas in Katja’s work and more: 🥦 the role 3D generation plays in conceptual...
Published 02/24/21
Joel Lehman was previously a founding member at Uber AI Labs and assistant professor at the IT University of Copenhagen. He's now a research scientist at OpenAI, where he focuses on open-endedness, reinforcement learning, and AI safety. Joel’s PhD dissertation introduced the novelty search algorithm. That work inspired him to write the popular science book, “Why Greatness Cannot Be Planned”, with his PhD advisor Ken Stanley, which discusses what evolutionary algorithms imply for how...
Published 02/17/21
Cinjon Resnick was formerly at Google Brain and is now doing his PhD at NYU. We talk about why he believes scene understanding is critical to out-of-distribution generalization, and how his theses have evolved since he started his PhD. Some topics we cover: How Cinjon started his research by trying to grow a baby through language and games, before running into a wall with this approach How spending time at circuses 🎪 and with gymnasts 🤸🏽‍♂️ re-invigorated his research, and convinced him to...
Published 02/01/21
Sarah Jane Hong is the co-founder of Latent Space, a startup building the first fully AI-rendered 3D engine in order to democratize creativity. We touch on what it was like taking classes under Geoff Hinton in 2013, the trouble with using natural language prompts to render a scene, why a model’s ability to scale is more important than getting state-of-the-art results, and more.
Published 01/07/21
We interview Kelvin Guu, a researcher at Google AI and the creator of REALM.  The conversation is a wide-ranging tour of language models, how computers interact with world knowledge, and much more.
Published 12/15/20