22 - Shard Theory with Quintin Pope
Description
What can we learn about advanced deep learning systems by understanding how humans learn and form values over their lifetimes? Will superhuman AI look like ruthless coherent utility optimization, or more like a mishmash of contextually activated desires? This episode's guest, Quintin Pope, has been thinking about these questions as a leading researcher in the shard theory community. We talk about what shard theory is, what it says about humans and neural networks, and what the implications are for making AI safe.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast
Episode art by Hamish Doodles

Topics we discuss, and timestamps:
0:00:42 - Why understand human value formation?
0:19:59 - Why not design methods to align to arbitrary values?
0:27:22 - Postulates about human brains
0:36:20 - Sufficiency of the postulates
0:44:55 - Reinforcement learning as conditional sampling
0:48:05 - Compatibility with genetically-influenced behaviour
1:03:06 - Why deep learning is basically what the brain does
1:25:17 - Shard theory
1:38:49 - Shard theory vs expected utility optimizers
1:54:45 - What shard theory says about human values
2:05:47 - Does shard theory mean we're doomed?
2:18:54 - Will nice behaviour generalize?
2:33:48 - Does alignment generalize farther than capabilities?
2:42:03 - Are we at the end of machine learning history?
2:53:09 - Shard theory predictions
2:59:47 - The shard theory research community
3:13:45 - Why do shard theorists not work on replicating human childhoods?
3:25:53 - Following shardy research

The transcript

Shard theorist links:
Quintin's LessWrong profile
Alex Turner's LessWrong profile
Shard theory Discord
EleutherAI Discord

Research we discuss:
The Shard Theory Sequence
Pretraining Language Models with Human Preferences
Inner alignment in salt-starved rats
Intro to Brain-like AGI Safety Sequence
Brains and transformers:
  The neural architecture of language: Integrative modeling converges on predictive processing
  Brains and algorithms partially converge in natural language processing
  Evidence of a predictive coding hierarchy in the human brain listening to speech
Singular learning theory explainer: Neural networks generalize because of this one weird trick
Singular learning theory links
Implicit Regularization via Neural Feature Alignment, aka circles in the parameter-function map
The shard theory of human values
Predicting inductive biases of pre-trained networks
Understanding and controlling a maze-solving policy network, aka the cheese vector
Quintin's research agenda: Supervising AIs improving AIs
Steering GPT-2-XL by adding an activation vector

Links for the addendum on mesa-optimization skepticism:
Quintin's response to Yudkowsky arguing against AIs being steerable by gradient descent
Quintin on why evolution is not like AI training
Evolution provides no evidence for the sharp left turn
Let's Agree to Agree: Neural Networks Share Classification Order on Real Datasets