Description
This talk examines whether advanced AIs that perform well in training will be doing so in order to gain power later, a behavior Joe Carlsmith calls "scheming" (also often called "deceptive alignment"). It gives an overview of his recent report on the topic, available on arXiv: https://arxiv.org/abs/2311.08379.
Joe Carlsmith is a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and he has a doctorate in philosophy from the University of Oxford.
Watch on YouTube: https://www.youtube.com/watch?v=AxUTiGS6BHM