“39 - Evan Hubinger on Model Organisms of Misalignment” by DanielFilan
Description
YouTube link

The ‘model organisms of misalignment’ line of research creates AI models that exhibit various types of misalignment, and studies them to try to understand how the misalignment occurs and whether it can be somehow removed. In this episode, Evan Hubinger talks about two papers he's worked on at Anthropic under this agenda: “Sleeper Agents” and “Sycophancy to Subterfuge”.

Topics we discuss:
Model organisms and stress-testing
Sleeper Agents
Do ‘sleeper agents’ properly model deceptive alignment?
Surprising results in “Sleeper Agents”
Sycophancy to Subterfuge
How models generalize from sycophancy to subterfuge
Is the reward editing task valid?
Training away sycophancy and subterfuge
Model organisms, AI control, and evaluations
Other model organisms research
Alignment stress-testing at Anthropic
Following Evan's work

Daniel Filan: Hello, everybody. In this episode, I’ll be speaking with Evan Hubinger. [...]

---

Outline:
(01:46) Model organisms and stress-testing
(09:02) Sleeper Agents
(25:18) Do ‘sleeper agents’ properly model deceptive alignment?
(42:08) Surprising results in “Sleeper Agents”
(01:02:51) Sycophancy to Subterfuge
(01:15:27) How models generalize from sycophancy to subterfuge
(01:23:27) Is the reward editing task valid?
(01:28:53) Training away sycophancy and subterfuge
(01:36:42) Model organisms, AI control, and evaluations
(01:41:12) Other model organisms research
(01:43:11) Alignment stress-testing at Anthropic
(01:51:07) Following Evan's work

---

First published: December 1st, 2024

Source: https://www.lesswrong.com/posts/sookiqxkzzLmPYB3r/39-evan-hubinger-on-model-organisms-of-misalignment

---

Narrated by TYPE III AUDIO.
More Episodes
Audio note: this article contains 449 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. Based on research performed in the MATS 5.1 extension program, under the mentorship of Alex Turner (TurnTrout). Research...
Published 12/04/24
Preface
Several friends have asked me what psychological effects I think could affect human judgement about x-risk. This isn't a complete answer, but in 2018 I wrote a draft of "AI Research Considerations for Human Existential Safety" (ARCHES) that included an overview of cognitive biases...
Published 12/04/24
In the spirit of the season, you can book a call with me to help w/ your interp project (no large coding though). Would you like someone to:
Review your paper or code?
Brainstorm ideas on next steps?
How to best communicate your results?
Discuss conceptual problems?
Obvious Advice (e.g. being...
Published 12/03/24