8 - Assistance Games with Dylan Hadfield-Menell
How should we think about the technical problem of building smarter-than-human AI that does what we want? When and how should AI systems defer to us? Should they have their own goals, and how should those goals be managed? In this episode, Dylan Hadfield-Menell talks about his work on assistance games, which formalizes these questions. The first couple of years of my PhD program included many long conversations with Dylan that helped shape how I view AI x-risk research, so it was great to have another one in the form of a recorded interview.

- Link to the transcript
- Link to the paper "Cooperative Inverse Reinforcement Learning"
- Link to the paper "The Off-Switch Game"
- Link to the paper "Inverse Reward Design"
- Dylan's twitter account
- Link to apply to the MIT EECS graduate program

Other work mentioned in the discussion:

- The original paper on inverse optimal control
- Justin Fu's research on, among other things, adversarial IRL
- Preferences implicit in the state of the world
- What are you optimizing for? Aligning recommender systems with human values
- The Assistive Multi-Armed Bandit
- Soares et al. on Corrigibility
- Should Robots be Obedient?
- Rodney Brooks on the Seven Deadly Sins of Predicting the Future of AI
- Products in category theory
- AXRP Episode 7 - Side Effects with Victoria Krakovna
- Attainable Utility Preservation
- Penalizing side effects using stepwise relative reachability
- Simplifying Reward Design through Divide-and-Conquer
- Active Inverse Reward Design
- An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning
- Incomplete Contracting and AI Alignment
- Multi-Principal Assistance Games
- Consequences of Misaligned AI