21 - Interpretability for Engineers with Stephen Casper
Description
Lots of people in the field of machine learning study 'interpretability', developing tools that they say give us useful information about neural networks. But how do we know if meaningful progress is actually being made? What should we want out of these tools? In this episode, I speak to Stephen Casper about these questions, as well as about a benchmark he's co-developed to evaluate whether interpretability tools can find 'Trojan horses' hidden inside neural nets.

Patreon: patreon.com/axrpodcast
Store: store.axrp.net
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
00:00:42 - Interpretability for engineers
00:00:42 - Why interpretability?
00:12:55 - Adversaries and interpretability
00:24:30 - Scaling interpretability
00:42:29 - Critiques of the AI safety interpretability community
00:56:10 - Deceptive alignment and interpretability
01:09:48 - Benchmarking Interpretability Tools (for Deep Neural Networks) (Using Trojan Discovery)
01:10:40 - Why Trojans?
01:14:53 - Which interpretability tools?
01:28:40 - Trojan generation
01:38:13 - Evaluation
01:46:07 - Interpretability for shaping policy
01:53:55 - Following Casper's work

The transcript

Links for Casper:
Personal website
Twitter
Email: scasper [at] mit [dot] edu

Research we discuss:
The Engineer's Interpretability Sequence
Benchmarking Interpretability Tools for Deep Neural Networks
Adversarial Policies Beat Superhuman Go AIs
Adversarial Examples Are Not Bugs, They Are Features
Planting Undetectable Backdoors in Machine Learning Models
Softmax Linear Units
Red-Teaming the Stable Diffusion Safety Filter

Episode art by Hamish Doodles
More Episodes
In 2022, it was shown that a fairly simple method can be used to extract the beliefs of a language model on any given topic, without having to actually understand the topic at hand. Earlier, in 2021, it was found that neural networks sometimes 'grok': that is, when training them on...
Published 04/25/24