Imagine A World: What if global challenges led to more centralization?
Description
What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states - and do we want this?

Imagine A World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of eight diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In the third episode of Imagine A World, we explore the fictional worldbuild titled 'Core Central'. How does a team of seven academics agree on one cohesive imagined world? That's a question the team behind 'Core Central', a second-place prizewinner in the FLI Worldbuilding Contest, had to figure out as they went along. In the end, the entry's realistic sense of multipolarity and messiness reflects positively on its organic formulation. The team settled on one core, centralised AGI system as the governance model for their entire world, a choice that eventually moves their world 'beyond' nation states. Could this really work?

In this episode, Guillaume Riesen speaks to John Burden and Henry Shevlin, two of the academics representing the team that created 'Core Central'. The full team has seven members: three (Henry, John and Beba Cibralic) are researchers at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and five (Jessica Bland, Lara Mani, Clarissa Rios Rojas and Catherine Richards, alongside John) work with the Centre for the Study of Existential Risk, also at the University of Cambridge.

Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas presented in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.
Explore this imagined world: https://worldbuild.ai/core-central

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected]. You can find out more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

Media and concepts referenced in the episode:
https://en.wikipedia.org/wiki/Culture_series
https://en.wikipedia.org/wiki/The_Expanse_(TV_series)
https://www.vox.com/authors/kelsey-piper
https://en.wikipedia.org/wiki/Gratitude_journal
https://en.wikipedia.org/wiki/The_Diamond_Age
https://www.scientificamerican.com/article/the-mind-of-an-octopus/
https://en.wikipedia.org/wiki/Global_workspace_theory
https://en.wikipedia.org/wiki/Alien_hand_syndrome
https://en.wikipedia.org/wiki/Hyperion_(Simmons_novel)
More Episodes
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.    Here's...
Published 11/22/24
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated...
Published 11/08/24