BlueDot Impact
AI Safety Fundamentals
Alignment
Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment
Recent Episodes
This paper explains Anthropic’s constitutional AI approach, which is largely an extension of RLHF, but with AIs replacing the human demonstrators and human evaluators. Everything in this paper is relevant to this week's learning objectives, and we recommend you read it in its entirety. It summarises...
Published 07/19/24