Daniel Filan
AXRP
the AI X-risk Research Podcast
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
Ratings & Reviews
4.6 stars from 5 ratings
It’s early days, but if this keeps up, I think it’s safe to say that this is my new favourite podcast. I’ve been interested in AI risk for quite a while, but only started getting into machine learning from a hands-on technical perspective in the past few months. This podcast is proving to be a...
Jichah via Apple Podcasts · Germany · 12/26/20
Recent Episodes
Reinforcement Learning from Human Feedback, or RLHF, is one of the main ways that makers of large language models make them 'aligned'. But people have long noted that there are difficulties with this approach when the models are smarter than the humans providing feedback. In this episode, I talk...
Published 06/12/24
What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group. Patreon: patreon.com/axrpodcast Ko-fi:...
Published 05/30/24