Episode 07: Yujia Huang, Caltech, on neuro-inspired generative models
Description
Yujia Huang (@YujiaHuangC) is a PhD student at Caltech, working at the intersection of deep learning and neuroscience. She worked on optics and biophotonics before venturing into machine learning. Now, she hopes to design “less artificial” artificial intelligence. Her most recent paper at NeurIPS, “Neural Networks with Recurrent Generative Feedback,” introduces Convolutional Neural Networks with Feedback (CNN-F). Yujia is open to working with collaborators from many areas: neuroscience, signal processing, and control experts, as well as those interested in generative models for classification. Feel free to reach out to her!

Highlights from our conversation:
🏗 How recurrent generative feedback, a neuro-inspired design, improves adversarial robustness and can be more label-efficient
🧠 Adapting theories from neuroscience and classical research for machine learning
📊 What a new Turing test for “less artificial” or generalized AI could look like
💡 Tips for new machine learning researchers!