Catastrophe and consent
Description
In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns? The previous episode started to look at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to look at both the very bad and the very good outcomes that could follow from the emergence of AI superintelligence.

Topics addressed in this episode include:
*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) How an AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) The single technical fault that caused the failure of automated passport checking throughout the UK
*) The example of automated control of the Boeing 737 Max causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov using his human judgement regarding an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of growing public panic - and its potential mishandling by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) The role enhanced human intelligence could play in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: cosmists vs. terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
More Episodes
Our guest in this episode grew up in an abandoned town in Tasmania, and is now a researcher and blogger in Berkeley, California. After taking a degree in human ecology and science communication, Katja Grace co-founded AI Impacts, a research organisation trying to answer questions about the future...
Published 06/13/24
Our guest in this episode is Max More. Max is a philosopher, a futurist, and a transhumanist - a term which he coined in 1990, the same year that he legally changed his name from O’Connor to More. One of the tenets of transhumanism is that technology will allow us to prevent and reverse the aging...
Published 06/05/24