Katja Grace on Slowing Down AI and Whether the X-Risk Case Holds Up
Description
Katja Grace is a researcher and writer. She runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of artificial intelligence (AI). Katja blogs primarily at worldspiritsockpuppet, and indirectly at Meteuphoric, Worldly Positions, LessWrong and the EA Forum.

We discuss:
- What is AI Impacts working on?
- Counterarguments to the basic AI x-risk case
- Reasons to doubt that superhuman AI systems will be strongly goal-directed
- Reasons to doubt that, if goal-directed superhuman AI systems are built, their goals will be bad by human lights
- Aren't deep learning systems fairly good at understanding our 'true' intentions?
- Reasons to doubt that (misaligned) superhuman AI would overpower humanity
- The case for slowing down AI
- Is AI really an arms race?
- Are there examples from history of valuable technologies being limited or slowed down?
- What does Katja think about the recent open letter on pausing giant AI experiments?
- Why read George Saunders?

Key links:
- World Spirit Sock Puppet (Katja's main blog)
- Counterarguments to the basic AI x-risk case
- Let's think about slowing down AI
- We don't trade with ants
- Thank You, Esther Forbes (George Saunders)

You can see more links and a full transcript at hearthisidea.com/episodes/grace.
More Episodes
Joe Carlsmith is a writer, researcher, and philosopher. He works as a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and holds a doctorate in...
Published 03/16/24
Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside. His main interests include connections between empirical psychology and philosophy of mind and the nature of belief. His book The Weirdness of the World can be found here. We talk about: The possibility...
Published 02/04/24