Description
If an AI system learned a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? That's the question that Evan and his coauthors at Anthropic sought to answer in their work on "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training", which Evan will be discussing.
Evan Hubinger leads the new Alignment Stress-Testing team at Anthropic, which is tasked with red-teaming Anthropic's internal alignment techniques and evaluations. Prior to joining Anthropic, Evan was a Research Fellow at the Machine Intelligence Research Institute, where he worked on a variety of theoretical alignment research, including "Risks from Learned Optimization in Advanced Machine Learning Systems". Evan will be talking about the Anthropic Alignment Stress-Testing team's first paper, "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training".
Watch on YouTube: https://www.youtube.com/watch?v=BgfT0AcosHw
According to one recent estimate, there are one sextillion animals on Earth that may be sentient, most living in the wild. Yet wild animal welfare is neglected by intergovernmental bodies such as the IPCC. This talk discusses the importance and difficulty of developing a framework for evaluating...
Published 10/24/24
Darren Margolias, Executive Director of @BeastPhilanthropy, answers questions from EA Forum users, posted here: https://forum.effectivealtruism.org/posts/7QfKaF2bnCbuREJNx/ama-beast-philanthropy-s-darren-margolias/
Watch on YouTube: https://www.youtube.com/watch?v=0ylphNrBjWI
Published 10/24/24