23 - Mechanistic Anomaly Detection with Mark Xu
Description
Is there some way we can detect bad behaviour in our AI system without having to know exactly what it looks like? In this episode, I speak with Mark Xu about mechanistic anomaly detection: a research direction based on the idea of detecting strange things happening in neural networks, in the hope that this will alert us to potential treacherous turns. We talk about the core problem of relating mechanistic anomalies to bad behaviour, as well as the paper "Formalizing the Presumption of Independence", which poses the problem of formalizing heuristic mathematical reasoning, in the hope that this will let us mathematically define "mechanistic anomalies".

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast
Episode art by Hamish Doodles: hamishdoodles.com/

Topics we discuss, and timestamps:
0:00:38 - Mechanistic anomaly detection
0:09:28 - Are all bad things mechanistic anomalies, and vice versa?
0:18:12 - Are responses to novel situations mechanistic anomalies?
0:39:19 - Formalizing "for the normal reason, for any reason"
1:05:22 - How useful is mechanistic anomaly detection?
1:12:38 - Formalizing the Presumption of Independence
1:20:05 - Heuristic arguments in physics
1:27:48 - Difficult domains for heuristic arguments
1:33:37 - Why not maximum entropy?
1:44:39 - Adversarial robustness for heuristic arguments
1:54:05 - Other approaches to defining mechanisms
1:57:20 - The research plan: progress and next steps
2:04:13 - Following ARC's research

The transcript: axrp.net/episode/2023/07/24/episode-23-mechanistic-anomaly-detection-mark-xu.html

ARC links:
Website: alignment.org
Theory blog: alignment.org/blog
Hiring page: alignment.org/hiring

Research we discuss:
Formalizing the presumption of independence: arxiv.org/abs/2211.06738
Eliciting Latent Knowledge (aka ELK): alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge
Mechanistic Anomaly Detection and ELK: alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk
Can we efficiently explain model behaviours? alignmentforum.org/posts/dQvxMZkfgqGitWdkb/can-we-efficiently-explain-model-behaviors
Can we efficiently distinguish different mechanisms? alignmentforum.org/posts/JLyWP2Y9LAruR2gi9/can-we-efficiently-distinguish-different-mechanisms
More Episodes
Reinforcement Learning from Human Feedback, or RLHF, is one of the main ways that makers of large language models make them 'aligned'. But people have long noted that there are difficulties with this approach when the models are smarter than the humans providing feedback. In this episode, I talk...
Published 06/12/24
What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group. Patreon: patreon.com/axrpodcast Ko-fi:...
Published 05/30/24