Kurt Gray on human-robot interaction and mind perception
Description
“And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.” (Kurt Gray)

What is mind perception? What do we know about mind perception of AI and robots? Why do people like to use AI for some decisions but not for moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?

Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.

Topics discussed in the episode:
Introduction (0:00)
How did a geophysicist come to be doing social psychology? (0:51)
What do the Deepest Beliefs Lab and the Center for the Science of Moral Understanding do? (3:11)
What is mind perception? (4:45)
What is a mind? (7:45)
Agency vs. experience, or thinking vs. feeling (9:40)
Why do people see moral exemplars as being insensitive to pain? (10:45)
How will people perceive minds in robots/AI? (18:50)
Perspective taking as a tool to reduce substratism towards AI (29:30)
Why don't people like using AI to make moral decisions? (32:25)
What would be the moral status of AI if they are not sentient? (38:00)
The presence of robots can make people seem more similar (44:10)
What can we expect about discrimination towards digital minds in the future? (48:30)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast