Thomas Metzinger on a moratorium on artificial sentience development
Description
"And from an applied ethics perspective, I think the most important thing is: if we want to minimize suffering in the world, and if we want to minimize animal suffering, we should always err on the side of caution; we should always be on the safe side." — Thomas Metzinger

Should we advocate for a moratorium on the development of artificial sentience? What might that look like, and what would be the challenges?

Thomas Metzinger was a full professor of theoretical philosophy at the Johannes Gutenberg-Universität Mainz until 2022 and is now professor emeritus. He was president of the German Cognitive Science Society from 2005 to 2007, president of the Association for the Scientific Study of Consciousness from 2009 to 2011, and has been an adjunct fellow at the Frankfurt Institute for Advanced Studies since 2011. He is also a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation, and a member of the advisory board of the Giordano Bruno Foundation. In 2009, he published a popular book, The Ego Tunnel: The Science of the Mind and the Myth of the Self, which addresses a wider audience and discusses the ethical, cultural, and social consequences of consciousness research. From 2018 to 2020, Metzinger worked as a member of the European Commission's High-Level Expert Group on Artificial Intelligence.

Topics discussed in the episode:
0:00 Introduction
2:12 Defining consciousness and sentience
9:55 What features might a sentient artificial intelligence have?
17:11 Moratorium on artificial sentience development
37:46 Case for a moratorium
49:30 What would a moratorium look like?
53:07 Social hallucination problem
55:49 Incentives of politicians
1:01:51 Incentives of tech companies
1:07:18 Local vs. global moratoriums
1:11:52 Repealing the moratorium
1:16:01 Information hazards
1:22:21 Trends in thinking on artificial sentience over time
1:39:38 What are the open problems in this field, and how might someone work on them with their career?

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show