Description
Many scary stories about AI involve an AI system deceiving and subjugating humans in order to gain the ability to achieve its goals without us stopping it. This episode's guest, Alex Turner, will tell us about his research analyzing the notions of "attainable utility" and "power" that underlie these stories, so that we can better evaluate how likely they are and how to prevent them.
Topics we discuss:
Side effects minimization
Attainable Utility Preservation (AUP)
AUP and alignment
Power-seeking
Power-seeking and alignment
Future work and about Alex
The transcript
Alex on the AI Alignment Forum
Alex's Google Scholar page
Conservative Agency via Attainable Utility Preservation
Optimal Policies Tend to Seek Power
Other works discussed:
Avoiding Side Effects by Considering Future Tasks
The "Reframing Impact" Sequence
The "Risks from Learned Optimization" Sequence
Concrete Approval-Directed Agents
Seeking Power is Convergently Instrumental in a Broad Class of Environments
Formalizing Convergent Instrumental Goals
The More Power at Stake, the Stronger Instrumental Convergence Gets for Optimal Policies
Problem Relaxation as a Tactic
How I do Research
Math that Clicks: Look for Two-way Correspondences
Testing the Natural Abstraction Hypothesis