The 4 Cs of Superintelligence
The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of the possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.

Topics addressed in this episode include:

*) Reasons why superintelligence might never be created
*) How timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars, and travelling to Mars without knowing what we'll breathe when we get there
*) A startling illustration of the dramatic power of exponential growth
*) Why concern for short-term risks is by no means a reason to pay less attention to longer-term risks
*) Why the "Cease" option looks more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Going beyond the statement of "Life, liberty, and the pursuit of happiness" to a starting point for aligning AI with human values?
*) A cliff-hanger ending

The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration