The Sentience Institute Podcast
Sentience Institute

    • Science
    • 5.0 • 11 Ratings

23 episodes

Interviews with activists, social scientists, entrepreneurs and change-makers about the most effective strategies to expand humanity’s moral circle, with an emphasis on expanding the circle to farmed animals. Host Jamie Harris, a researcher at moral expansion think tank Sentience Institute, takes a deep dive with guests into advocacy strategies from political initiatives to corporate campaigns to technological innovation to consumer interventions, and discusses advocacy lessons from history, sociology, and psychology.

    Eric Schwitzgebel on user perception of the moral status of AI

    “I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it’s obvious to users that they’re sentient, and so they evoke appropriate emotional reactions in users. So you don’t create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don’t create a non-sentient machine that people will attach to so much and think it’s sentient that they’d be willing to make excessive sacrifices for this thing that isn’t really sentient.”
    Eric Schwitzgebel

    Why should AI systems be designed so as not to confuse users about their moral status? What would make an AI system’s sentience or moral standing clear? Are there downsides to treating an AI as not sentient even if it’s not sentient? What happens when some theories of consciousness disagree about AI consciousness? Have the developments in large language models in the last few years come faster or slower than Eric expected? Where does Eric think we will see sentience first in AI, if we do?
    Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His books include Describing Inner Experience? Proponent Meets Skeptic (with Russell T. Hurlburt), Perplexities of Consciousness, A Theory of Jerks and Other Philosophical Misadventures, and most recently The Weirdness of the World. He blogs at The Splintered Mind.


    Topics discussed in the episode:
    Introduction (0:00)
    AI systems must not confuse users about their sentience or moral status: introduction (3:14)
    Not confusing experts (5:30)
    Not confusing general users (9:12)
    What would make an AI system’s sentience or moral standing clear? (13:21)
    Are there downsides to treating an AI as not sentient even if it’s not sentient? (16:33)
    How would we implement this solution at a policy level? (25:19)
    What happens when some theories of consciousness disagree about AI consciousness? (28:24)
    How does this approach to uncertainty in AI consciousness relate to Jeff Sebo’s approach? (34:15)
    Consciousness in artificial intelligence: insights from the science of consciousness: introduction (36:38)
    How does the indicator properties approach account for factors relating to consciousness that we might be missing? (39:37)
    What was the process for determining what indicator properties to include? (42:58)
    Advantages of the indicator properties approach (44:49)
    Have the developments in large language models in the last few years come faster or slower than Eric expected? (46:25)
    Where does Eric think we will see sentience first in AI, if we do? (50:17)
    Are things like grounding or embodiment essential for understanding and consciousness? (53:35)
    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
    Support the show

    • 57 min
    Raphaël Millière on large language models

    “Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do by interacting with the world, and so interactive learning, not just passive learning. You want something that’s more active, where the model is going to actually test out some hypotheses and learn from the feedback it’s getting from the world about these hypotheses in the way children do; it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They’re like little scientists: you see babies grabbing their feet, and testing whether that’s part of my body or not, and learning gradually and very quickly learning all these things. Language models don’t do that. They don’t explore in this way. They don’t have the capacity for interaction in this way.”
    Raphaël Millière

    How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand for large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language?
    Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.
    Topics discussed in the episode:
    Introduction (0:00)
    How Raphaël came to work on AI (1:25)
    How do large language models work? (5:50)
    Deflationary and inflationary claims about large language models (19:25)
    The dangers of overclaiming and underclaiming (25:20)
    Summary of cognitive capacities large language models might have (33:20)
    Intelligence (38:10)
    Artificial general intelligence (53:30)
    Consciousness and sentience (1:06:10)
    Theory of mind (1:18:09)
    Compositionality (1:24:15)
    Language understanding and referential grounding (1:30:45)
    Which cognitive capacities are most useful to understand for various purposes? (1:41:10)
    Conclusion (1:47:23)
    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
    Support the show

    • 1 hr 49 min
    Matti Wilks on human-animal interaction and moral circle expansion

    “Speciesism being socially learned is probably our most dominant theory of why we think we’re getting the results that we’re getting. But to be very clear, this is super early research. We have a lot more work to do. And it’s actually not just in the context of speciesism that we’re finding this stuff. So basically we’ve run some studies showing that while adults will prioritize humans over even very large numbers of animals in sort of tragic trade-offs, children are much more likely to prioritize humans’ and animals’ lives similarly. So an adult will save one person over a hundred dogs or pigs, whereas children will save, I think it was two dogs or six pigs, over one person. And this was with children who were about five to 10 years old. So often when you look at biases in development, something like minimal group bias, that peaks quite young.”
    Matti Wilks

    What does our understanding of human-animal interaction imply for human-robot interaction? Is speciesism socially learned? Does expanding the moral circle dilute it? Why is there a correlation between naturalness and acceptableness? What are some potential interventions for moral circle expansion and spillover from and to animal advocacy?
    Matti Wilks is a lecturer (assistant professor) in psychology at the University of Edinburgh. She uses approaches from social and developmental psychology to explore barriers to prosocial and ethical behavior—right now she is interested in factors that shape how we morally value others, the motivations of unusually altruistic groups, why we prefer natural things, and our attitudes towards cultured meat. Matti completed her PhD in developmental psychology at the University of Queensland, Australia, and was a postdoc at Princeton and Yale Universities.
    Topics discussed in the episode:
    Introduction (0:00)
    What matters ethically? (1:00)
    The link between animals and digital minds (3:10)
    Higher vs lower orders of pleasure/suffering (4:15)
    Psychology of human-animal interaction and what that means for human-robot interaction (5:40)
    Is speciesism socially learned? (10:15)
    Implications for animal advocacy strategy (19:40)
    Moral expansiveness scale and the moral circle (23:50)
    Does expanding the moral circle dilute it? (27:40)
    Predictors for attitudes towards species and artificial sentience (30:05)
    Correlation between naturalness and acceptableness (38:30)
    What does our understanding of naturalness and acceptableness imply for attitudes towards cultured meat? (49:00)
    How can we counter concerns about naturalness in cultured meat? (52:00)
    What does our understanding of attitudes towards naturalness imply for artificial sentience? (54:00)
    Interventions for moral circle expansion and spillover from and to animal advocacy (56:30)
    Academic field building as a strategy for developing a cause area (1:00:50)

    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
    Support the show

    • 1 hr 6 min
    David Gunkel on robot rights

    “Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different.”
    David Gunkel

    Can and should robots and AI have rights? What’s the difference between robots and AI? Should we grant robots rights even if they aren’t sentient? What might robot rights look like in practice? What philosophies and other ways of thinking are we not exploring enough? What might human-robot interactions look like in the future? What can we learn from science fiction? Can and should we be trying to actively get others to think of robots in a more positive light?
     David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published twelve internationally recognized books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA). 
     Topics discussed in the episode:
    Introduction (0:00)
    Why robot rights and not AI rights? (1:12)
    The other question: can and should robots have rights? (5:39)
    What is the case for robot rights? (10:21)
    What would robot rights look like? (19:50)
    What can we learn from other, particularly non-western, ways of thinking for robot rights? (26:33)
    What will human-robot interaction look like in the future? (33:20)
    How artificial sentience being less discrete than biological sentience might affect the case for rights (40:45)
    Things we can learn from science fiction for human-robot interaction and robot rights (42:55)
    Can and should we do anything to encourage people to see robots in a more positive light? (47:55)
    Why David pursued philosophy of technology over computer science more generally (52:01)
    Does having technical expertise give you more credibility? (54:01)
    Shifts in thinking about robots and AI David has noticed over his career (58:03)
    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
    Support the show

    • 1 hr 4 min
    Kurt Gray on human-robot interaction and mind perception

    “And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.”
    Kurt Gray

    What is mind perception? What do we know about mind perception of AI/robots? Why do people like to use AI for some decisions but not moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?
    Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.
    Topics discussed in the episode:
    Introduction (0:00)
    How did a geophysicist come to be doing social psychology? (0:51)
    What do the Deepest Beliefs Lab and the Center for the Science of Moral Understanding do? (3:11)
    What is mind perception? (4:45)
    What is a mind? (7:45)
    Agency vs experience, or thinking vs feeling (9:40)
    Why do people see moral exemplars as being insensitive to pain? (10:45)
    How will people perceive minds in robots/AI? (18:50)
    Perspective taking as a tool to reduce substratism towards AI (29:30)
    Why don’t people like using AI to make moral decisions? (32:25)
    What would be the moral status of AI if they are not sentient? (38:00)
    The presence of robots can make people seem more similar (44:10)
    What can we expect about discrimination towards digital minds in the future? (48:30)
    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
    Support the show

    • 59 min
    Thomas Metzinger on a moratorium on artificial sentience development

    “And from an applied ethics perspective, I think the most important thing is, if we want to minimize suffering in the world, and if we want to minimize animal suffering, we should always err on the side of caution; we should always be on the safe side.”
    Thomas Metzinger

    Should we advocate for a moratorium on the development of artificial sentience? What might that look like, and what would be the challenges?

    Thomas Metzinger was a full professor of theoretical philosophy at the Johannes Gutenberg University Mainz until 2022, and is now a professor emeritus. Before that, he was president of the German Cognitive Science Society from 2005 to 2007 and president of the Association for the Scientific Study of Consciousness from 2009 to 2011, and he has been an adjunct fellow at the Frankfurt Institute for Advanced Studies since 2011. He is also a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation, and on the advisory board of the Giordano Bruno Foundation. In 2009, he published a popular book, The Ego Tunnel: The Science of the Mind and the Myth of the Self, which addresses a wider audience and discusses the ethical, cultural, and social consequences of consciousness research. From 2018 to 2020, Metzinger worked as a member of the European Commission’s High-Level Expert Group on Artificial Intelligence.

    Topics discussed in the episode:
    0:00 Introduction
    2:12 Defining consciousness and sentience
    9:55 What features might a sentient artificial intelligence have?
    17:11 Moratorium on artificial sentience development
    37:46 Case for a moratorium
    49:30 What would a moratorium look like?
    53:07 Social hallucination problem
    55:49 Incentives of politicians
    1:01:51 Incentives of tech companies
    1:07:18 Local vs global moratoriums
    1:11:52 Repealing the moratorium
    1:16:01 Information hazards
    1:22:21 Trends in thinking on artificial sentience over time
    1:39:38 What are the open problems in this field, and how might someone work on them with their career?
    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
    Support the show

    • 1 hr 50 min


