David Gunkel on robot rights
Description
“Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different.” (David Gunkel)

Can and should robots and AI have rights? What’s the difference between robots and AI? Should we grant robots rights even if they aren’t sentient? What might robot rights look like in practice? What philosophies and other ways of thinking are we not exploring enough? What might human-robot interactions look like in the future? What can we learn from science fiction? Can and should we be trying to actively get others to think of robots in a more positive light?

David J. Gunkel is an award-winning educator, scholar, and author specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published twelve internationally recognized books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press, 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press, 2016), and Robot Rights (MIT Press, 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA).

Topics discussed in the episode:
Introduction (0:00)
Why robot rights and not AI rights? (1:12)
The other question: can and should robots have rights? (5:39)
What is the case for robot rights? (10:21)
What would robot rights look like? (19:50)
What can we learn from other, particularly non-western, ways of thinking for robot rights? (26:33)
What will human-robot interaction look like in the future? (33:20)
How artificial sentience being less discrete than biological sentience might affect the case for rights (40:45)
Things we can learn from science fiction for human-robot interaction and robot rights (42:55)
Can and should we do anything to encourage people to see robots in a more positive light? (47:55)
Why David pursued philosophy of technology over computer science more generally (52:01)
Does having technical expertise give you more credibility? (54:01)
Shifts in thinking about robots and AI David has noticed over his career (58:03)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show