How good is AI at detecting online hate?
Description
AI is widely lauded as a way of reducing the burden on human online content moderators. However, to understand whether AI could, and should, replace human moderators, we need to understand its strengths and limitations. In this episode our hosts speak to researchers Paul Röttger and Bertie Vidgen about how they are attempting to tackle online hate speech, in particular through their work on HateCheck, a suite of tests for hate speech detection models.
More Episodes
This week on the podcast, we bring you a conversation the hosts had last December with PhD candidate Elizabeth Seger. Elizabeth studies at the University of Cambridge and is a research assistant at the Leverhulme Centre for the Future of Intelligence. Talking about her work with The Alan Turing...
Published 08/06/21
In this episode hosts Jo Dungate and Rachel Winstanley speak to Andrew Holding, a Senior Research Associate at Cancer Research UK's (CRUK) Cambridge Institute and Turing Fellow. Andrew discusses how his research is using machine learning to understand the biology that underlies breast cancer to...
Published 07/23/21
The hosts chat with Professor Robert Foley, who works on Human Evolution at the University of Cambridge and is a Fellow of The Alan Turing Institute. The conversation takes a broad view of how our understanding of human evolution has changed in recent decades and focusses in on the Turing...
Published 07/09/21