The Likelihood and Risks of Superintelligent Machines
Description
Kurt Andersen speaks with computer scientist Stuart Russell about the risks of machines reaching superintelligence and advancing beyond human control. To avoid this, Russell believes, we need to start over with AI and build machines that are uncertain about what humans want.

STUART RUSSELL is a computer scientist and professor at the University of California, Berkeley. He is the author, most recently, of Human Compatible: Artificial Intelligence and the Problem of Control. He has served as Vice-Chair of the World Economic Forum's Council on AI and Robotics and as an advisor to the United Nations on arms control. With Peter Norvig, he is the author of the widely acclaimed textbook Artificial Intelligence: A Modern Approach.

A transcript of this episode is available at Aventine.org.