24 - Superalignment with Jan Leike
Description
Recently, OpenAI made a splash by announcing a new "Superalignment" team. Led by Jan Leike and Ilya Sutskever, the team would consist of top researchers attempting to solve alignment for superintelligent AIs in four years by figuring out how to build a trustworthy human-level AI alignment researcher, and then using it to solve the rest of the problem. But what does this plan actually involve? In this episode, I talk to Jan Leike about the plan and the challenges it faces.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast
Episode art by Hamish Doodles: hamishdoodles.com/

Topics we discuss, and timestamps:
0:00:37 - The superalignment team
0:02:10 - What's a human-level automated alignment researcher?
0:06:59 - The gap between human-level automated alignment researchers and superintelligence
0:18:39 - What does it do?
0:24:13 - Recursive self-improvement
0:26:14 - How to make the AI AI alignment researcher
0:30:09 - Scalable oversight
0:44:38 - Searching for bad behaviors and internals
0:54:14 - Deliberately training misaligned models
1:02:34 - Four year deadline
1:07:06 - What if it takes longer?
1:11:38 - The superalignment team and...
1:11:38 - ... governance
1:14:37 - ... other OpenAI teams
1:18:17 - ... other labs
1:26:10 - Superalignment team logistics
1:29:17 - Generalization
1:43:44 - Complementary research
1:48:29 - Why is Jan optimistic?
1:58:32 - Long-term agency in LLMs?
2:02:44 - Do LLMs understand alignment?
2:06:01 - Following Jan's research

The transcript: axrp.net/episode/2023/07/27/episode-24-superalignment-jan-leike.html

Links for Jan and OpenAI:
OpenAI jobs: openai.com/careers
Jan's substack: aligned.substack.com
Jan's twitter: twitter.com/janleike

Links to research and other writings we discuss:
Introducing Superalignment: openai.com/blog/introducing-superalignment
Let's Verify Step by Step (process-based feedback on math): arxiv.org/abs/2305.20050
Planning for AGI and beyond: openai.com/blog/planning-for-agi-and-beyond
Self-critiquing models for assisting human evaluators: arxiv.org/abs/2206.05802
An Interpretability Illusion for BERT: arxiv.org/abs/2104.07143
Language models can explain neurons in language models: https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html
Our approach to alignment research: openai.com/blog/our-approach-to-alignment-research
Training language models to follow instructions with human feedback (aka the Instruct-GPT paper): arxiv.org/abs/2203.02155